2026-04-05 01:46:13.402576 | Job console starting
2026-04-05 01:46:13.412795 | Updating git repos
2026-04-05 01:46:13.466104 | Cloning repos into workspace
2026-04-05 01:46:13.694812 | Restoring repo states
2026-04-05 01:46:13.714987 | Merging changes
2026-04-05 01:46:13.715030 | Checking out repos
2026-04-05 01:46:13.963655 | Preparing playbooks
2026-04-05 01:46:14.562101 | Running Ansible setup
2026-04-05 01:46:18.897157 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-05 01:46:19.670113 |
2026-04-05 01:46:19.670296 | PLAY [Base pre]
2026-04-05 01:46:19.687784 |
2026-04-05 01:46:19.687925 | TASK [Setup log path fact]
2026-04-05 01:46:19.718020 | orchestrator | ok
2026-04-05 01:46:19.736184 |
2026-04-05 01:46:19.736329 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 01:46:19.776038 | orchestrator | ok
2026-04-05 01:46:19.788603 |
2026-04-05 01:46:19.788720 | TASK [emit-job-header : Print job information]
2026-04-05 01:46:19.839310 | # Job Information
2026-04-05 01:46:19.839586 | Ansible Version: 2.16.14
2026-04-05 01:46:19.839647 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-05 01:46:19.839704 | Pipeline: periodic-midnight
2026-04-05 01:46:19.839744 | Executor: 521e9411259a
2026-04-05 01:46:19.839781 | Triggered by: https://github.com/osism/testbed
2026-04-05 01:46:19.839820 | Event ID: 1928b94beaae403ebd11dd0b50186fab
2026-04-05 01:46:19.855890 |
2026-04-05 01:46:19.856076 | LOOP [emit-job-header : Print node information]
2026-04-05 01:46:20.001642 | orchestrator | ok:
2026-04-05 01:46:20.002090 | orchestrator | # Node Information
2026-04-05 01:46:20.002202 | orchestrator | Inventory Hostname: orchestrator
2026-04-05 01:46:20.002266 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-05 01:46:20.002327 | orchestrator | Username: zuul-testbed06
2026-04-05 01:46:20.002383 | orchestrator | Distro: Debian 12.13
2026-04-05 01:46:20.002444 | orchestrator | Provider: static-testbed
2026-04-05 01:46:20.002500 | orchestrator | Region:
2026-04-05 01:46:20.002555 | orchestrator | Label: testbed-orchestrator
2026-04-05 01:46:20.002608 | orchestrator | Product Name: OpenStack Nova
2026-04-05 01:46:20.002657 | orchestrator | Interface IP: 81.163.193.140
2026-04-05 01:46:20.024096 |
2026-04-05 01:46:20.024243 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-05 01:46:20.588653 | orchestrator -> localhost | changed
2026-04-05 01:46:20.607614 |
2026-04-05 01:46:20.607818 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-05 01:46:21.657895 | orchestrator -> localhost | changed
2026-04-05 01:46:21.672555 |
2026-04-05 01:46:21.672674 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-05 01:46:21.972174 | orchestrator -> localhost | ok
2026-04-05 01:46:21.984470 |
2026-04-05 01:46:21.984633 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-05 01:46:22.022245 | orchestrator | ok
2026-04-05 01:46:22.043447 | orchestrator | included: /var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-05 01:46:22.051631 |
2026-04-05 01:46:22.051729 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-05 01:46:23.199501 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-05 01:46:23.200055 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/03468ae7aa2d4f669cb72cd41f266296_id_rsa
2026-04-05 01:46:23.200255 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/03468ae7aa2d4f669cb72cd41f266296_id_rsa.pub
2026-04-05 01:46:23.200344 | orchestrator -> localhost | The key fingerprint is:
2026-04-05 01:46:23.200415 | orchestrator -> localhost | SHA256:F54JtyEAsCf8abJwLlkkg2/dD8k/byARa1/wlK66k30 zuul-build-sshkey
2026-04-05 01:46:23.200481 | orchestrator -> localhost | The key's randomart image is:
2026-04-05 01:46:23.200566 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-05 01:46:23.200633 | orchestrator -> localhost | | ..... . |
2026-04-05 01:46:23.200697 | orchestrator -> localhost | |.. . ... o |
2026-04-05 01:46:23.200756 | orchestrator -> localhost | |+ = . oo=+ |
2026-04-05 01:46:23.200814 | orchestrator -> localhost | | = = ++. =+* |
2026-04-05 01:46:23.200873 | orchestrator -> localhost | |. * =.=oSo* |
2026-04-05 01:46:23.200994 | orchestrator -> localhost | | B + .++. |
2026-04-05 01:46:23.201058 | orchestrator -> localhost | |o o =+. |
2026-04-05 01:46:23.201115 | orchestrator -> localhost | | . + .oE |
2026-04-05 01:46:23.201176 | orchestrator -> localhost | | .o o. |
2026-04-05 01:46:23.201234 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-05 01:46:23.201368 | orchestrator -> localhost | ok: Runtime: 0:00:00.629110
2026-04-05 01:46:23.218132 |
2026-04-05 01:46:23.218315 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-05 01:46:23.266923 | orchestrator | ok
2026-04-05 01:46:23.282225 | orchestrator | included: /var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-05 01:46:23.292766 |
2026-04-05 01:46:23.292877 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-05 01:46:23.318349 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:23.333339 |
2026-04-05 01:46:23.333485 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-05 01:46:23.968871 | orchestrator | changed
2026-04-05 01:46:23.975698 |
2026-04-05 01:46:23.975819 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-05 01:46:24.276128 | orchestrator | ok
2026-04-05 01:46:24.284819 |
2026-04-05 01:46:24.284976 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-05 01:46:24.702730 | orchestrator | ok
2026-04-05 01:46:24.709000 |
2026-04-05 01:46:24.709119 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-05 01:46:25.203487 | orchestrator | ok
2026-04-05 01:46:25.213116 |
2026-04-05 01:46:25.213268 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-05 01:46:25.239275 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:25.253262 |
2026-04-05 01:46:25.253427 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-05 01:46:25.745659 | orchestrator -> localhost | changed
2026-04-05 01:46:25.771732 |
2026-04-05 01:46:25.771894 | TASK [add-build-sshkey : Add back temp key]
2026-04-05 01:46:26.120559 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/03468ae7aa2d4f669cb72cd41f266296_id_rsa (zuul-build-sshkey)
2026-04-05 01:46:26.120850 | orchestrator -> localhost | ok: Runtime: 0:00:00.018776
2026-04-05 01:46:26.128645 |
2026-04-05 01:46:26.128763 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-05 01:46:26.568651 | orchestrator | ok
2026-04-05 01:46:26.577560 |
2026-04-05 01:46:26.577699 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-05 01:46:26.613112 | orchestrator | skipping: Conditional result was False
2026-04-05 01:46:26.675994 |
2026-04-05 01:46:26.676126 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-05 01:46:27.123839 | orchestrator | ok
2026-04-05 01:46:27.135299 |
2026-04-05 01:46:27.135420 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-05 01:46:27.164482 | orchestrator | ok
2026-04-05 01:46:27.171852 |
2026-04-05 01:46:27.171997 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-05 01:46:27.481562 | orchestrator -> localhost | ok
2026-04-05 01:46:27.497852 |
2026-04-05 01:46:27.498068 | TASK [validate-host : Collect information about the host]
2026-04-05 01:46:28.793408 | orchestrator | ok
2026-04-05 01:46:28.808774 |
2026-04-05 01:46:28.808898 | TASK [validate-host : Sanitize hostname]
2026-04-05 01:46:28.883576 | orchestrator | ok
2026-04-05 01:46:28.890916 |
2026-04-05 01:46:28.891060 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-05 01:46:29.454689 | orchestrator -> localhost | changed
2026-04-05 01:46:29.468922 |
2026-04-05 01:46:29.469105 | TASK [validate-host : Collect information about zuul worker]
2026-04-05 01:46:29.940788 | orchestrator | ok
2026-04-05 01:46:29.949381 |
2026-04-05 01:46:29.949535 | TASK [validate-host : Write out all zuul information for each host]
2026-04-05 01:46:30.561283 | orchestrator -> localhost | changed
2026-04-05 01:46:30.572375 |
2026-04-05 01:46:30.572493 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-05 01:46:30.879625 | orchestrator | ok
2026-04-05 01:46:30.888214 |
2026-04-05 01:46:30.888348 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-05 01:46:55.518128 | orchestrator | changed:
2026-04-05 01:46:55.518378 | orchestrator | .d..t...... src/
2026-04-05 01:46:55.518419 | orchestrator | .d..t...... src/github.com/
2026-04-05 01:46:55.518446 | orchestrator | .d..t...... src/github.com/osism/
2026-04-05 01:46:55.518470 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-05 01:46:55.518492 | orchestrator | RedHat.yml
2026-04-05 01:46:55.533789 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-05 01:46:55.533806 | orchestrator | RedHat.yml
2026-04-05 01:46:55.533857 | orchestrator | = 2.2.0"...
2026-04-05 01:47:05.784759 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-05 01:47:05.800075 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-05 01:47:06.257139 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-05 01:47:06.927799 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 01:47:07.315723 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-05 01:47:07.839483 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-05 01:47:08.323145 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-05 01:47:09.101401 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-05 01:47:09.101480 | orchestrator |
2026-04-05 01:47:09.101488 | orchestrator | Providers are signed by their developers.
2026-04-05 01:47:09.101494 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-05 01:47:09.101500 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-05 01:47:09.101506 | orchestrator |
2026-04-05 01:47:09.101510 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-05 01:47:09.101531 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-05 01:47:09.101536 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-05 01:47:09.101540 | orchestrator | you run "tofu init" in the future.
2026-04-05 01:47:09.101871 | orchestrator |
2026-04-05 01:47:09.101912 | orchestrator | OpenTofu has been successfully initialized!
2026-04-05 01:47:09.101949 | orchestrator |
2026-04-05 01:47:09.101954 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-05 01:47:09.101958 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-05 01:47:09.101962 | orchestrator | should now work.
2026-04-05 01:47:09.101966 | orchestrator |
2026-04-05 01:47:09.101970 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-05 01:47:09.101974 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-05 01:47:09.101979 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-05 01:47:09.272299 | orchestrator | Created and switched to workspace "ci"!
2026-04-05 01:47:09.272479 | orchestrator |
2026-04-05 01:47:09.272498 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-05 01:47:09.272507 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-05 01:47:09.272516 | orchestrator | for this configuration.
2026-04-05 01:47:09.437918 | orchestrator | ci.auto.tfvars
2026-04-05 01:47:09.444578 | orchestrator | default_custom.tf
2026-04-05 01:47:10.507476 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-05 01:47:11.082105 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-05 01:47:11.332540 | orchestrator |
2026-04-05 01:47:11.332636 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-05 01:47:11.332649 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-05 01:47:11.332658 | orchestrator | + create
2026-04-05 01:47:11.332668 | orchestrator | <= read (data resources)
2026-04-05 01:47:11.332678 | orchestrator |
2026-04-05 01:47:11.332688 | orchestrator | OpenTofu will perform the following actions:
2026-04-05 01:47:11.332707 | orchestrator |
2026-04-05 01:47:11.332715 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-05 01:47:11.332725 | orchestrator | # (config refers to values not yet known)
2026-04-05 01:47:11.332734 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-05 01:47:11.332743 | orchestrator | + checksum = (known after apply)
2026-04-05 01:47:11.332752 | orchestrator | + created_at = (known after apply)
2026-04-05 01:47:11.332761 | orchestrator | + file = (known after apply)
2026-04-05 01:47:11.332770 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.332807 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.332816 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 01:47:11.332825 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 01:47:11.332834 | orchestrator | + most_recent = true
2026-04-05 01:47:11.332842 | orchestrator | + name = (known after apply)
2026-04-05 01:47:11.332851 | orchestrator | + protected = (known after apply)
2026-04-05 01:47:11.332860 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.332871 | orchestrator | + schema = (known after apply)
2026-04-05 01:47:11.332880 | orchestrator | + size_bytes = (known after apply)
2026-04-05 01:47:11.332889 | orchestrator | + tags = (known after apply)
2026-04-05 01:47:11.332957 | orchestrator | + updated_at = (known after apply)
2026-04-05 01:47:11.332965 | orchestrator | }
2026-04-05 01:47:11.332979 | orchestrator |
2026-04-05 01:47:11.332987 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-05 01:47:11.332996 | orchestrator | # (config refers to values not yet known)
2026-04-05 01:47:11.333004 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-05 01:47:11.333012 | orchestrator | + checksum = (known after apply)
2026-04-05 01:47:11.333019 | orchestrator | + created_at = (known after apply)
2026-04-05 01:47:11.333027 | orchestrator | + file = (known after apply)
2026-04-05 01:47:11.333034 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333041 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333049 | orchestrator | + min_disk_gb = (known after apply)
2026-04-05 01:47:11.333056 | orchestrator | + min_ram_mb = (known after apply)
2026-04-05 01:47:11.333064 | orchestrator | + most_recent = true
2026-04-05 01:47:11.333072 | orchestrator | + name = (known after apply)
2026-04-05 01:47:11.333080 | orchestrator | + protected = (known after apply)
2026-04-05 01:47:11.333087 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.333095 | orchestrator | + schema = (known after apply)
2026-04-05 01:47:11.333103 | orchestrator | + size_bytes = (known after apply)
2026-04-05 01:47:11.333111 | orchestrator | + tags = (known after apply)
2026-04-05 01:47:11.333119 | orchestrator | + updated_at = (known after apply)
2026-04-05 01:47:11.333127 | orchestrator | }
2026-04-05 01:47:11.333135 | orchestrator |
2026-04-05 01:47:11.333143 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-05 01:47:11.333151 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-05 01:47:11.333160 | orchestrator | + content = (known after apply)
2026-04-05 01:47:11.333168 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 01:47:11.333176 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 01:47:11.333184 | orchestrator | + content_md5 = (known after apply)
2026-04-05 01:47:11.333192 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 01:47:11.333200 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 01:47:11.333208 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 01:47:11.333216 | orchestrator | + directory_permission = "0777"
2026-04-05 01:47:11.333224 | orchestrator | + file_permission = "0644"
2026-04-05 01:47:11.333232 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-05 01:47:11.333240 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333248 | orchestrator | }
2026-04-05 01:47:11.333255 | orchestrator |
2026-04-05 01:47:11.333263 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-05 01:47:11.333270 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-05 01:47:11.333277 | orchestrator | + content = (known after apply)
2026-04-05 01:47:11.333285 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 01:47:11.333295 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 01:47:11.333303 | orchestrator | + content_md5 = (known after apply)
2026-04-05 01:47:11.333313 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 01:47:11.333321 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 01:47:11.333342 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 01:47:11.333349 | orchestrator | + directory_permission = "0777"
2026-04-05 01:47:11.333357 | orchestrator | + file_permission = "0644"
2026-04-05 01:47:11.333388 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-05 01:47:11.333396 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333403 | orchestrator | }
2026-04-05 01:47:11.333415 | orchestrator |
2026-04-05 01:47:11.333423 | orchestrator | # local_file.inventory will be created
2026-04-05 01:47:11.333430 | orchestrator | + resource "local_file" "inventory" {
2026-04-05 01:47:11.333437 | orchestrator | + content = (known after apply)
2026-04-05 01:47:11.333445 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 01:47:11.333451 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 01:47:11.333458 | orchestrator | + content_md5 = (known after apply)
2026-04-05 01:47:11.333465 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 01:47:11.333475 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 01:47:11.333482 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 01:47:11.333489 | orchestrator | + directory_permission = "0777"
2026-04-05 01:47:11.333496 | orchestrator | + file_permission = "0644"
2026-04-05 01:47:11.333503 | orchestrator | + filename = "inventory.ci"
2026-04-05 01:47:11.333509 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333516 | orchestrator | }
2026-04-05 01:47:11.333524 | orchestrator |
2026-04-05 01:47:11.333531 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-05 01:47:11.333538 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-05 01:47:11.333545 | orchestrator | + content = (sensitive value)
2026-04-05 01:47:11.333553 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-05 01:47:11.333560 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-05 01:47:11.333568 | orchestrator | + content_md5 = (known after apply)
2026-04-05 01:47:11.333576 | orchestrator | + content_sha1 = (known after apply)
2026-04-05 01:47:11.333584 | orchestrator | + content_sha256 = (known after apply)
2026-04-05 01:47:11.333591 | orchestrator | + content_sha512 = (known after apply)
2026-04-05 01:47:11.333599 | orchestrator | + directory_permission = "0700"
2026-04-05 01:47:11.333607 | orchestrator | + file_permission = "0600"
2026-04-05 01:47:11.333612 | orchestrator | + filename = ".id_rsa.ci"
2026-04-05 01:47:11.333617 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333621 | orchestrator | }
2026-04-05 01:47:11.333626 | orchestrator |
2026-04-05 01:47:11.333630 | orchestrator | # null_resource.node_semaphore will be created
2026-04-05 01:47:11.333635 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-05 01:47:11.333640 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333645 | orchestrator | }
2026-04-05 01:47:11.333649 | orchestrator |
2026-04-05 01:47:11.333654 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-05 01:47:11.333659 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-05 01:47:11.333664 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.333668 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.333673 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333678 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.333682 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333687 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-05 01:47:11.333692 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.333697 | orchestrator | + size = 80
2026-04-05 01:47:11.333702 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.333706 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.333711 | orchestrator | }
2026-04-05 01:47:11.333715 | orchestrator |
2026-04-05 01:47:11.333720 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-05 01:47:11.333725 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.333729 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.333734 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.333739 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333749 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.333754 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333759 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-05 01:47:11.333764 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.333769 | orchestrator | + size = 80
2026-04-05 01:47:11.333773 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.333778 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.333783 | orchestrator | }
2026-04-05 01:47:11.333787 | orchestrator |
2026-04-05 01:47:11.333792 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-05 01:47:11.333796 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.333801 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.333806 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.333810 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333815 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.333820 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333825 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-05 01:47:11.333830 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.333834 | orchestrator | + size = 80
2026-04-05 01:47:11.333839 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.333843 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.333848 | orchestrator | }
2026-04-05 01:47:11.333853 | orchestrator |
2026-04-05 01:47:11.333857 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-05 01:47:11.333862 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.333866 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.333871 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.333876 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333880 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.333885 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333889 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-05 01:47:11.333917 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.333925 | orchestrator | + size = 80
2026-04-05 01:47:11.333935 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.333940 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.333945 | orchestrator | }
2026-04-05 01:47:11.333954 | orchestrator |
2026-04-05 01:47:11.333959 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-05 01:47:11.333964 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.333968 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.333973 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.333978 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.333983 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.333987 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.333992 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-05 01:47:11.333997 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334001 | orchestrator | + size = 80
2026-04-05 01:47:11.334006 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334033 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334040 | orchestrator | }
2026-04-05 01:47:11.334044 | orchestrator |
2026-04-05 01:47:11.334049 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-05 01:47:11.334054 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.334058 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334063 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334067 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334077 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.334082 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334086 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-05 01:47:11.334092 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334096 | orchestrator | + size = 80
2026-04-05 01:47:11.334101 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334106 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334111 | orchestrator | }
2026-04-05 01:47:11.334115 | orchestrator |
2026-04-05 01:47:11.334120 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-05 01:47:11.334124 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-05 01:47:11.334129 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334134 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334138 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334143 | orchestrator | + image_id = (known after apply)
2026-04-05 01:47:11.334147 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334152 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-05 01:47:11.334157 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334161 | orchestrator | + size = 80
2026-04-05 01:47:11.334166 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334170 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334175 | orchestrator | }
2026-04-05 01:47:11.334179 | orchestrator |
2026-04-05 01:47:11.334184 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-05 01:47:11.334189 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334194 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334198 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334203 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334208 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334212 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-05 01:47:11.334217 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334222 | orchestrator | + size = 20
2026-04-05 01:47:11.334226 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334231 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334236 | orchestrator | }
2026-04-05 01:47:11.334240 | orchestrator |
2026-04-05 01:47:11.334245 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-05 01:47:11.334250 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334254 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334259 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334263 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334268 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334272 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-05 01:47:11.334277 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334282 | orchestrator | + size = 20
2026-04-05 01:47:11.334286 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334291 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334295 | orchestrator | }
2026-04-05 01:47:11.334300 | orchestrator |
2026-04-05 01:47:11.334305 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-05 01:47:11.334309 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334314 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334319 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334323 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334328 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334333 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-05 01:47:11.334337 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334348 | orchestrator | + size = 20
2026-04-05 01:47:11.334352 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334357 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334362 | orchestrator | }
2026-04-05 01:47:11.334366 | orchestrator |
2026-04-05 01:47:11.334371 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-05 01:47:11.334375 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334380 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334384 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334389 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334397 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334402 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-05 01:47:11.334407 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334411 | orchestrator | + size = 20
2026-04-05 01:47:11.334416 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334421 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334425 | orchestrator | }
2026-04-05 01:47:11.334430 | orchestrator |
2026-04-05 01:47:11.334439 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-05 01:47:11.334444 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334449 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334453 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334458 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334462 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334467 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-05 01:47:11.334472 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334476 | orchestrator | + size = 20
2026-04-05 01:47:11.334481 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334485 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334490 | orchestrator | }
2026-04-05 01:47:11.334495 | orchestrator |
2026-04-05 01:47:11.334499 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-05 01:47:11.334504 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334509 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334513 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334518 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334522 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334527 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-05 01:47:11.334531 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334536 | orchestrator | + size = 20
2026-04-05 01:47:11.334541 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334545 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334550 | orchestrator | }
2026-04-05 01:47:11.334554 | orchestrator |
2026-04-05 01:47:11.334559 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-05 01:47:11.334564 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334568 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334573 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334578 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334582 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334587 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-05 01:47:11.334591 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334596 | orchestrator | + size = 20
2026-04-05 01:47:11.334601 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334605 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334610 | orchestrator | }
2026-04-05 01:47:11.334614 | orchestrator |
2026-04-05 01:47:11.334619 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-05 01:47:11.334624 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-05 01:47:11.334633 | orchestrator | + attachment = (known after apply)
2026-04-05 01:47:11.334638 | orchestrator | + availability_zone = "nova"
2026-04-05 01:47:11.334643 | orchestrator | + id = (known after apply)
2026-04-05 01:47:11.334647 | orchestrator | + metadata = (known after apply)
2026-04-05 01:47:11.334652 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-05 01:47:11.334656 | orchestrator | + region = (known after apply)
2026-04-05 01:47:11.334661 | orchestrator | + size = 20
2026-04-05 01:47:11.334666 | orchestrator | + volume_retype_policy = "never"
2026-04-05 01:47:11.334670 | orchestrator | + volume_type = "ssd"
2026-04-05 01:47:11.334675 | orchestrator | }
2026-04-05 01:47:11.334680 | orchestrator |
2026-04-05 01:47:11.334684 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-05 01:47:11.334689 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-05 01:47:11.334693 | orchestrator | + attachment = (known after apply) 2026-04-05 01:47:11.334698 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.334703 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.334707 | orchestrator | + metadata = (known after apply) 2026-04-05 01:47:11.334712 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-05 01:47:11.334716 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.334721 | orchestrator | + size = 20 2026-04-05 01:47:11.334725 | orchestrator | + volume_retype_policy = "never" 2026-04-05 01:47:11.334730 | orchestrator | + volume_type = "ssd" 2026-04-05 01:47:11.334735 | orchestrator | } 2026-04-05 01:47:11.334739 | orchestrator | 2026-04-05 01:47:11.334744 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-05 01:47:11.334749 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-05 01:47:11.334753 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.334758 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.334763 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.334767 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.334772 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.334776 | orchestrator | + config_drive = true 2026-04-05 01:47:11.334784 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.334788 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.334793 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-05 01:47:11.334798 | orchestrator | + force_delete = false 2026-04-05 01:47:11.334802 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.334807 | 
orchestrator | + id = (known after apply) 2026-04-05 01:47:11.334811 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.334816 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.334821 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.334825 | orchestrator | + name = "testbed-manager" 2026-04-05 01:47:11.334830 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.334835 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.334839 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.334844 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.334848 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.334853 | orchestrator | + user_data = (sensitive value) 2026-04-05 01:47:11.334857 | orchestrator | 2026-04-05 01:47:11.334862 | orchestrator | + block_device { 2026-04-05 01:47:11.334867 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.334872 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.334876 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.334881 | orchestrator | + multiattach = false 2026-04-05 01:47:11.334885 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.334913 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.334922 | orchestrator | } 2026-04-05 01:47:11.334927 | orchestrator | 2026-04-05 01:47:11.334932 | orchestrator | + network { 2026-04-05 01:47:11.334936 | orchestrator | + access_network = false 2026-04-05 01:47:11.334941 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.334945 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.334950 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.334955 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.334959 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.334964 | orchestrator | + uuid = (known after apply) 2026-04-05 
01:47:11.334969 | orchestrator | } 2026-04-05 01:47:11.334973 | orchestrator | } 2026-04-05 01:47:11.334978 | orchestrator | 2026-04-05 01:47:11.334982 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-05 01:47:11.334987 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.334992 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.334996 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.335001 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.335006 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.335010 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.335015 | orchestrator | + config_drive = true 2026-04-05 01:47:11.335019 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.335024 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.335028 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.335033 | orchestrator | + force_delete = false 2026-04-05 01:47:11.335038 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.335042 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.335047 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.335052 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.335056 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.335061 | orchestrator | + name = "testbed-node-0" 2026-04-05 01:47:11.335065 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.335070 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.335075 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.335079 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.335084 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.335089 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.335093 | orchestrator | 2026-04-05 01:47:11.335098 | orchestrator | + block_device { 2026-04-05 01:47:11.335103 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.335107 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.335112 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.335116 | orchestrator | + multiattach = false 2026-04-05 01:47:11.335121 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.335125 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335130 | orchestrator | } 2026-04-05 01:47:11.335135 | orchestrator | 2026-04-05 01:47:11.335139 | orchestrator | + network { 2026-04-05 01:47:11.335144 | orchestrator | + access_network = false 2026-04-05 01:47:11.335148 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.335153 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.335158 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.335162 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.335167 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.335172 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335176 | orchestrator | } 2026-04-05 01:47:11.335181 | orchestrator | } 2026-04-05 01:47:11.335185 | orchestrator | 2026-04-05 01:47:11.335190 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-05 01:47:11.335195 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.335200 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.335207 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.335212 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.335217 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.335221 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.335226 
| orchestrator | + config_drive = true 2026-04-05 01:47:11.335230 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.335235 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.335240 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.335244 | orchestrator | + force_delete = false 2026-04-05 01:47:11.335249 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.335254 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.335258 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.335263 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.335267 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.335272 | orchestrator | + name = "testbed-node-1" 2026-04-05 01:47:11.335277 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.335281 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.335286 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.335291 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.335295 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.335303 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.335308 | orchestrator | 2026-04-05 01:47:11.335313 | orchestrator | + block_device { 2026-04-05 01:47:11.335317 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.335322 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.335327 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.335331 | orchestrator | + multiattach = false 2026-04-05 01:47:11.335336 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.335340 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335345 | orchestrator | } 2026-04-05 01:47:11.335350 | orchestrator | 2026-04-05 01:47:11.335355 | orchestrator | + network { 2026-04-05 01:47:11.335359 | orchestrator | + access_network = 
false 2026-04-05 01:47:11.335364 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.335368 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.335373 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.335378 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.335382 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.335387 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335392 | orchestrator | } 2026-04-05 01:47:11.335396 | orchestrator | } 2026-04-05 01:47:11.335404 | orchestrator | 2026-04-05 01:47:11.335408 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-05 01:47:11.335413 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.335418 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.335422 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.335427 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.335432 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.335437 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.335441 | orchestrator | + config_drive = true 2026-04-05 01:47:11.335446 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.335451 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.335455 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.335460 | orchestrator | + force_delete = false 2026-04-05 01:47:11.335464 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.335469 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.335474 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.335482 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.335486 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.335491 | orchestrator | + name = 
"testbed-node-2" 2026-04-05 01:47:11.335495 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.335500 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.335505 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.335509 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.335514 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.335518 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.335523 | orchestrator | 2026-04-05 01:47:11.335528 | orchestrator | + block_device { 2026-04-05 01:47:11.335533 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.335537 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.335542 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.335546 | orchestrator | + multiattach = false 2026-04-05 01:47:11.335551 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.335555 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335560 | orchestrator | } 2026-04-05 01:47:11.335565 | orchestrator | 2026-04-05 01:47:11.335569 | orchestrator | + network { 2026-04-05 01:47:11.335574 | orchestrator | + access_network = false 2026-04-05 01:47:11.335578 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.335583 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.335587 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.335592 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.335597 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.335601 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335606 | orchestrator | } 2026-04-05 01:47:11.335611 | orchestrator | } 2026-04-05 01:47:11.335615 | orchestrator | 2026-04-05 01:47:11.335623 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-05 01:47:11.335628 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.335633 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.335637 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.335642 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.335647 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.335651 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.335656 | orchestrator | + config_drive = true 2026-04-05 01:47:11.335661 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.335665 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.335670 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.335674 | orchestrator | + force_delete = false 2026-04-05 01:47:11.335679 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.335684 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.335688 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.335693 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.335698 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.335702 | orchestrator | + name = "testbed-node-3" 2026-04-05 01:47:11.335707 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.335712 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.335716 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.335721 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.335726 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.335730 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.335735 | orchestrator | 2026-04-05 01:47:11.335740 | orchestrator | + block_device { 2026-04-05 01:47:11.335744 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.335749 | orchestrator | + delete_on_termination = false 2026-04-05 
01:47:11.335754 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.335762 | orchestrator | + multiattach = false 2026-04-05 01:47:11.335766 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.335771 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335776 | orchestrator | } 2026-04-05 01:47:11.335781 | orchestrator | 2026-04-05 01:47:11.335785 | orchestrator | + network { 2026-04-05 01:47:11.335790 | orchestrator | + access_network = false 2026-04-05 01:47:11.335795 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.335799 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.335804 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.335809 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.335813 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.335819 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.335823 | orchestrator | } 2026-04-05 01:47:11.335828 | orchestrator | } 2026-04-05 01:47:11.335835 | orchestrator | 2026-04-05 01:47:11.335840 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-05 01:47:11.335845 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.335850 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.335854 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.335859 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.335864 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.335868 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.335873 | orchestrator | + config_drive = true 2026-04-05 01:47:11.335877 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.335882 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.335887 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.335891 | 
orchestrator | + force_delete = false 2026-04-05 01:47:11.335946 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.335955 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.335960 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.335964 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.335969 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.335974 | orchestrator | + name = "testbed-node-4" 2026-04-05 01:47:11.335978 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.335983 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.335987 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.335992 | orchestrator | + stop_before_destroy = false 2026-04-05 01:47:11.335997 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.336002 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.336007 | orchestrator | 2026-04-05 01:47:11.336011 | orchestrator | + block_device { 2026-04-05 01:47:11.336016 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.336021 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.336025 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.336030 | orchestrator | + multiattach = false 2026-04-05 01:47:11.336035 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.336039 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.336044 | orchestrator | } 2026-04-05 01:47:11.336049 | orchestrator | 2026-04-05 01:47:11.336053 | orchestrator | + network { 2026-04-05 01:47:11.336058 | orchestrator | + access_network = false 2026-04-05 01:47:11.336062 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.336067 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.336072 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.336077 | orchestrator | + name = (known 
after apply) 2026-04-05 01:47:11.336081 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.336086 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.336091 | orchestrator | } 2026-04-05 01:47:11.336095 | orchestrator | } 2026-04-05 01:47:11.336105 | orchestrator | 2026-04-05 01:47:11.336110 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-05 01:47:11.336115 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-05 01:47:11.336120 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-05 01:47:11.336125 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-05 01:47:11.336129 | orchestrator | + all_metadata = (known after apply) 2026-04-05 01:47:11.336134 | orchestrator | + all_tags = (known after apply) 2026-04-05 01:47:11.336139 | orchestrator | + availability_zone = "nova" 2026-04-05 01:47:11.336143 | orchestrator | + config_drive = true 2026-04-05 01:47:11.336148 | orchestrator | + created = (known after apply) 2026-04-05 01:47:11.336152 | orchestrator | + flavor_id = (known after apply) 2026-04-05 01:47:11.336157 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-05 01:47:11.336162 | orchestrator | + force_delete = false 2026-04-05 01:47:11.336166 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-05 01:47:11.336171 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.336176 | orchestrator | + image_id = (known after apply) 2026-04-05 01:47:11.336180 | orchestrator | + image_name = (known after apply) 2026-04-05 01:47:11.336185 | orchestrator | + key_pair = "testbed" 2026-04-05 01:47:11.336190 | orchestrator | + name = "testbed-node-5" 2026-04-05 01:47:11.336195 | orchestrator | + power_state = "active" 2026-04-05 01:47:11.336199 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.336204 | orchestrator | + security_groups = (known after apply) 2026-04-05 01:47:11.336208 | orchestrator | + 
stop_before_destroy = false 2026-04-05 01:47:11.336212 | orchestrator | + updated = (known after apply) 2026-04-05 01:47:11.336217 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-05 01:47:11.336221 | orchestrator | 2026-04-05 01:47:11.336225 | orchestrator | + block_device { 2026-04-05 01:47:11.336229 | orchestrator | + boot_index = 0 2026-04-05 01:47:11.336234 | orchestrator | + delete_on_termination = false 2026-04-05 01:47:11.336238 | orchestrator | + destination_type = "volume" 2026-04-05 01:47:11.336242 | orchestrator | + multiattach = false 2026-04-05 01:47:11.336246 | orchestrator | + source_type = "volume" 2026-04-05 01:47:11.336250 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.336254 | orchestrator | } 2026-04-05 01:47:11.336259 | orchestrator | 2026-04-05 01:47:11.336263 | orchestrator | + network { 2026-04-05 01:47:11.336267 | orchestrator | + access_network = false 2026-04-05 01:47:11.336271 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-05 01:47:11.336276 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-05 01:47:11.336280 | orchestrator | + mac = (known after apply) 2026-04-05 01:47:11.336284 | orchestrator | + name = (known after apply) 2026-04-05 01:47:11.336288 | orchestrator | + port = (known after apply) 2026-04-05 01:47:11.336292 | orchestrator | + uuid = (known after apply) 2026-04-05 01:47:11.336297 | orchestrator | } 2026-04-05 01:47:11.336301 | orchestrator | } 2026-04-05 01:47:11.336305 | orchestrator | 2026-04-05 01:47:11.336309 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-05 01:47:11.336314 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-05 01:47:11.336318 | orchestrator | + fingerprint = (known after apply) 2026-04-05 01:47:11.336322 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.336326 | orchestrator | + name = "testbed" 2026-04-05 01:47:11.336331 | orchestrator | + private_key = 
(sensitive value) 2026-04-05 01:47:11.336335 | orchestrator | + public_key = (known after apply) 2026-04-05 01:47:11.336339 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.336343 | orchestrator | + user_id = (known after apply) 2026-04-05 01:47:11.336347 | orchestrator | } 2026-04-05 01:47:11.336353 | orchestrator | 2026-04-05 01:47:11.336363 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-05 01:47:11.336370 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 01:47:11.336383 | orchestrator | + device = (known after apply) 2026-04-05 01:47:11.336391 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.336398 | orchestrator | + instance_id = (known after apply) 2026-04-05 01:47:11.336404 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.336415 | orchestrator | + volume_id = (known after apply) 2026-04-05 01:47:11.336420 | orchestrator | } 2026-04-05 01:47:11.336424 | orchestrator | 2026-04-05 01:47:11.336428 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-05 01:47:11.336432 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-05 01:47:11.336437 | orchestrator | + device = (known after apply) 2026-04-05 01:47:11.336441 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.336445 | orchestrator | + instance_id = (known after apply) 2026-04-05 01:47:11.336449 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.336453 | orchestrator | + volume_id = (known after apply) 2026-04-05 01:47:11.336457 | orchestrator | } 2026-04-05 01:47:11.336461 | orchestrator | 2026-04-05 01:47:11.336465 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-05 01:47:11.336470 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-05 01:47:11.339194 | orchestrator | + network_id = (known after apply) 2026-04-05 01:47:11.339201 | orchestrator | + no_gateway = false 2026-04-05 01:47:11.339208 | orchestrator | + region = (known after apply) 2026-04-05 01:47:11.339215 | orchestrator | + service_types = (known after apply) 2026-04-05 01:47:11.339227 | orchestrator | + tenant_id = (known after apply) 2026-04-05 01:47:11.339234 | orchestrator | 2026-04-05 01:47:11.339241 | orchestrator | + allocation_pool { 2026-04-05 01:47:11.339247 | orchestrator | + end = "192.168.31.250" 2026-04-05 01:47:11.339251 | orchestrator | + start = "192.168.31.200" 2026-04-05 01:47:11.339255 | orchestrator | } 2026-04-05 01:47:11.339259 | orchestrator | } 2026-04-05 01:47:11.339263 | orchestrator | 2026-04-05 01:47:11.339267 | orchestrator | # terraform_data.image will be created 2026-04-05 01:47:11.339270 | orchestrator | + resource "terraform_data" "image" { 2026-04-05 01:47:11.339274 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.339283 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 01:47:11.339287 | orchestrator | + output = (known after apply) 2026-04-05 01:47:11.339291 | orchestrator | } 2026-04-05 01:47:11.339295 | orchestrator | 2026-04-05 01:47:11.339298 | orchestrator | # terraform_data.image_node will be created 2026-04-05 01:47:11.339302 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-05 01:47:11.339306 | orchestrator | + id = (known after apply) 2026-04-05 01:47:11.339310 | orchestrator | + input = "Ubuntu 24.04" 2026-04-05 01:47:11.339314 | orchestrator | + output = (known after apply) 2026-04-05 01:47:11.339317 | orchestrator | } 2026-04-05 01:47:11.339321 | orchestrator | 2026-04-05 01:47:11.339325 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
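[editor's note] The management subnet planned above uses CIDR 192.168.16.0/20 with a DHCP allocation pool of 192.168.31.200–192.168.31.250. A minimal stdlib sketch (illustrative only, not part of the testbed tooling) confirms the pool sits inside the subnet:

```python
import ipaddress

# Values copied from the Terraform plan output above.
subnet = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# The pool must lie entirely within the subnet and be ordered.
assert pool_start in subnet and pool_end in subnet
assert pool_start < pool_end
print(subnet.num_addresses)  # a /20 spans 4096 addresses
```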
2026-04-05 01:47:11.339329 | orchestrator | 2026-04-05 01:47:11.339333 | orchestrator | Changes to Outputs: 2026-04-05 01:47:11.339336 | orchestrator | + manager_address = (sensitive value) 2026-04-05 01:47:11.339340 | orchestrator | + private_key = (sensitive value) 2026-04-05 01:47:11.520825 | orchestrator | terraform_data.image: Creating... 2026-04-05 01:47:11.521297 | orchestrator | terraform_data.image: Creation complete after 0s [id=910246ac-f947-d1f4-1ce7-fc9e96421838] 2026-04-05 01:47:11.575501 | orchestrator | terraform_data.image_node: Creating... 2026-04-05 01:47:11.575828 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=bd833f0e-80b2-723b-04b7-eed82eda9048] 2026-04-05 01:47:11.594966 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-05 01:47:11.595322 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-05 01:47:11.605324 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-05 01:47:11.606035 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-05 01:47:11.608851 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-05 01:47:11.609464 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-05 01:47:11.610576 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-05 01:47:11.627089 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-05 01:47:11.629871 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-04-05 01:47:11.632693 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-05 01:47:12.104502 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-05 01:47:12.110617 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2026-04-05 01:47:12.133828 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-05 01:47:12.139979 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-05 01:47:12.411313 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-05 01:47:12.416285 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-05 01:47:12.728473 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=27fbc229-5958-4bc7-8c2a-55723663ece7] 2026-04-05 01:47:12.746728 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-05 01:47:12.752805 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=78f3fb6376f08d00309df6059f1f6cf716c753d5] 2026-04-05 01:47:12.762979 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-05 01:47:12.767845 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=90f1720d9f1dce6c930d4fed8be07396fb02825c] 2026-04-05 01:47:12.773994 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-05 01:47:15.266811 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=c4b125a1-49de-45bb-8abb-de12a0ea86b6] 2026-04-05 01:47:15.273008 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-05 01:47:15.280542 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=19b95bad-a78c-4860-8023-fde2f6985c3d] 2026-04-05 01:47:15.281066 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=e411545b-3ce6-4571-8392-eb6cf6edb95c] 2026-04-05 01:47:15.292535 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2026-04-05 01:47:15.292884 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-04-05 01:47:15.296444 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=ff9c3d73-f1cc-45bb-b790-5886e9656564] 2026-04-05 01:47:15.308197 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-04-05 01:47:15.314987 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=ff5ba5b2-ecfa-45a1-89e6-23476a027e2c] 2026-04-05 01:47:15.321636 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=0b219c4b-918e-4afd-b52e-bcd8400111e4] 2026-04-05 01:47:15.323743 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-05 01:47:15.329058 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-05 01:47:15.372646 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=a4ecbb0a-2836-403e-9178-b4fc03a4ee51] 2026-04-05 01:47:15.380682 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=bb381a94-6fda-41aa-85e5-5a8e9e212f55] 2026-04-05 01:47:15.381933 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-05 01:47:15.623543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=2d4d21e8-ae21-4c18-add4-77055e4ecd22] 2026-04-05 01:47:16.122544 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=c0f7189e-4b5b-4fcd-ab1c-32b10bef3794] 2026-04-05 01:47:16.160646 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=63ec3cc2-9531-4856-9721-5af0c0f7a0a9] 2026-04-05 01:47:16.169330 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-04-05 01:47:18.663485 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=fe672449-71e9-4e4c-878a-a876f42bef0a] 2026-04-05 01:47:18.700455 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=e2ff4b61-09b0-482b-8c24-7e588d8d5007] 2026-04-05 01:47:18.752416 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=20d4ddc2-780d-4d78-9e94-d8812351d131] 2026-04-05 01:47:18.753695 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=1e425300-a1df-4921-af0a-0d26810bd200] 2026-04-05 01:47:18.768501 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f] 2026-04-05 01:47:18.798943 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=57f1796b-7846-459b-ac21-4d82893b0fc1] 2026-04-05 01:47:19.456217 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=40a85d3d-dd62-47af-8462-ef8be960ec77] 2026-04-05 01:47:19.467596 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-05 01:47:19.467834 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-04-05 01:47:19.468040 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-04-05 01:47:19.687774 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=52e0fabb-7fa9-4ba6-8461-5c998bc1ce00] 2026-04-05 01:47:19.688083 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=c5f4aae6-878f-442d-a800-2715675ae3ca] 2026-04-05 01:47:19.697089 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
2026-04-05 01:47:19.698467 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-04-05 01:47:19.699101 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-05 01:47:19.703605 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-05 01:47:19.704024 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-05 01:47:19.705069 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-05 01:47:19.707262 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-04-05 01:47:19.710759 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-05 01:47:19.712285 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-05 01:47:19.853221 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=40eabfac-f854-4836-9b94-56813d5ab1f5] 2026-04-05 01:47:19.862606 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-05 01:47:19.862761 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=dc664b64-eccb-44f9-bf26-762af28fd8fc] 2026-04-05 01:47:19.878758 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-04-05 01:47:20.022650 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d0c85d3b-abbe-40a9-a345-9d871b0f326f] 2026-04-05 01:47:20.023297 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3017444c-6987-4932-9806-ff01ce80660c] 2026-04-05 01:47:20.036170 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-05 01:47:20.037373 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-04-05 01:47:20.184862 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=726eb0b7-86e7-4dd6-b470-05ce44d439ed] 2026-04-05 01:47:20.197695 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-05 01:47:20.290399 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=2845a64f-012a-46e8-b004-1fa4d90b7ca9] 2026-04-05 01:47:20.310417 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-05 01:47:20.447837 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=b90db7ed-f148-4e8f-bfbb-0af1a5bb810a] 2026-04-05 01:47:20.457059 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
2026-04-05 01:47:20.473382 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=d72fff87-d18d-47c7-a944-9c0be84ba78d] 2026-04-05 01:47:20.515132 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=a4c0292c-3f94-4c26-8ae0-90bdef4622d1] 2026-04-05 01:47:20.666459 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=47524031-95a4-48b9-a9d1-bcf75f736923] 2026-04-05 01:47:20.755931 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=12d030cc-352d-434f-ae0b-c0e15d2664a1] 2026-04-05 01:47:20.806217 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=53aa7b42-5c68-4707-8a6a-32a5fe8e4386] 2026-04-05 01:47:20.818362 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=8c0a2c06-e9ea-4b81-8ff5-19baa6f1b331] 2026-04-05 01:47:21.033421 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c1da3943-d6f3-4764-b4bf-0a3918ac8d4c] 2026-04-05 01:47:21.078958 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=634634f0-091c-48b3-8273-59795721d20b] 2026-04-05 01:47:21.153396 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=552b39e0-003d-4f31-81b3-2cca3aaf67a7] 2026-04-05 01:47:24.376845 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=4f5ef2e3-7aa6-4f30-ba31-04e52ae16e42] 2026-04-05 01:47:24.398937 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-05 01:47:24.408868 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-04-05 01:47:24.409697 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-05 01:47:24.422514 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-05 01:47:24.429642 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-05 01:47:24.438231 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-05 01:47:24.438305 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-05 01:47:25.699101 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=55b0e901-ac4f-4f43-a91f-102f08300400] 2026-04-05 01:47:25.706080 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-05 01:47:25.717608 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-05 01:47:25.718654 | orchestrator | local_file.inventory: Creating... 2026-04-05 01:47:25.725640 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=0a9396540859d05320a02084fb802c3ce77daafa] 2026-04-05 01:47:25.727755 | orchestrator | local_file.inventory: Creation complete after 0s [id=74a4d86f806ee0874f8261291565c63ffc7c5ec7] 2026-04-05 01:47:26.455004 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=55b0e901-ac4f-4f43-a91f-102f08300400] 2026-04-05 01:47:34.407615 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-04-05 01:47:34.411855 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-05 01:47:34.423283 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-05 01:47:34.433055 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[10s elapsed] 2026-04-05 01:47:34.439449 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-05 01:47:34.439520 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-05 01:47:44.414821 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-05 01:47:44.414973 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-05 01:47:44.424269 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-05 01:47:44.434166 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-05 01:47:44.440614 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-05 01:47:44.440708 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-05 01:47:44.851302 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=3b3bfb6b-5e01-43cc-b0c6-8e694238545c] 2026-04-05 01:47:44.991859 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=694d3c5f-a347-4a57-8302-c64160b71607] 2026-04-05 01:47:45.484552 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=d255cc3a-ef7d-45db-b786-f673836b4828] 2026-04-05 01:47:54.423234 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-04-05 01:47:54.434412 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-05 01:47:54.441906 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed] 2026-04-05 01:47:55.036285 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=12b0694f-979d-437e-a1ad-848bb8543aca] 2026-04-05 01:47:55.122693 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=6494d6c3-2ce9-4e59-88a3-47ed0e03dfaa] 2026-04-05 01:47:55.171439 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=471d5598-72fb-4474-b6da-f8756c4d78df] 2026-04-05 01:47:55.336014 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-05 01:47:55.336064 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-04-05 01:47:55.336072 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-05 01:47:55.336078 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-05 01:47:55.336085 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-05 01:47:55.336091 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4252726655182550080] 2026-04-05 01:47:55.336097 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-05 01:47:55.336104 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-05 01:47:55.336110 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-05 01:47:55.336117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-05 01:47:55.336123 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-05 01:47:55.363076 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
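[editor's note] The `openstack_compute_volume_attach_v2` IDs reported in the log below are `<server_uuid>/<volume_uuid>` pairs (e.g. node_server[3] paired with node_volume[0]). A small illustrative helper, using one ID taken from this log, splits such an ID back into its parts:

```python
# Attachment IDs in this provider's log output join the server and
# volume UUIDs with a slash; split one back apart (example ID taken
# from the log below).
attach_id = "12b0694f-979d-437e-a1ad-848bb8543aca/a4ecbb0a-2836-403e-9178-b4fc03a4ee51"
server_id, volume_id = attach_id.split("/")
print(server_id)  # 12b0694f-979d-437e-a1ad-848bb8543aca
print(volume_id)  # a4ecbb0a-2836-403e-9178-b4fc03a4ee51
```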
2026-04-05 01:47:58.578068 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=6494d6c3-2ce9-4e59-88a3-47ed0e03dfaa/19b95bad-a78c-4860-8023-fde2f6985c3d] 2026-04-05 01:47:58.588823 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=12b0694f-979d-437e-a1ad-848bb8543aca/2d4d21e8-ae21-4c18-add4-77055e4ecd22] 2026-04-05 01:47:58.714334 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=694d3c5f-a347-4a57-8302-c64160b71607/ff5ba5b2-ecfa-45a1-89e6-23476a027e2c] 2026-04-05 01:47:58.759192 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=694d3c5f-a347-4a57-8302-c64160b71607/e411545b-3ce6-4571-8392-eb6cf6edb95c] 2026-04-05 01:47:58.896756 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=694d3c5f-a347-4a57-8302-c64160b71607/bb381a94-6fda-41aa-85e5-5a8e9e212f55] 2026-04-05 01:47:59.902294 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=6494d6c3-2ce9-4e59-88a3-47ed0e03dfaa/0b219c4b-918e-4afd-b52e-bcd8400111e4] 2026-04-05 01:47:59.908566 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=12b0694f-979d-437e-a1ad-848bb8543aca/c4b125a1-49de-45bb-8abb-de12a0ea86b6] 2026-04-05 01:47:59.934486 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=6494d6c3-2ce9-4e59-88a3-47ed0e03dfaa/ff9c3d73-f1cc-45bb-b790-5886e9656564] 2026-04-05 01:47:59.947767 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=12b0694f-979d-437e-a1ad-848bb8543aca/a4ecbb0a-2836-403e-9178-b4fc03a4ee51] 2026-04-05 01:48:05.364023 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-04-05 01:48:15.364561 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-05 01:48:15.616386 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=cf3d507f-d9ed-4974-8036-5f8ba1e1f936] 2026-04-05 01:48:15.639122 | orchestrator | 2026-04-05 01:48:15.639588 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-04-05 01:48:15.639633 | orchestrator | 2026-04-05 01:48:15.639656 | orchestrator | Outputs: 2026-04-05 01:48:15.639675 | orchestrator | 2026-04-05 01:48:15.639695 | orchestrator | manager_address = 2026-04-05 01:48:15.639715 | orchestrator | private_key = 2026-04-05 01:48:15.855345 | orchestrator | ok: Runtime: 0:01:10.082328 2026-04-05 01:48:15.885635 | 2026-04-05 01:48:15.885753 | TASK [Fetch manager address] 2026-04-05 01:48:16.351053 | orchestrator | ok 2026-04-05 01:48:16.358565 | 2026-04-05 01:48:16.358689 | TASK [Set manager_host address] 2026-04-05 01:48:16.438623 | orchestrator | ok 2026-04-05 01:48:16.448753 | 2026-04-05 01:48:16.448951 | LOOP [Update ansible collections] 2026-04-05 01:48:17.339528 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 01:48:17.339853 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-05 01:48:17.339993 | orchestrator | Starting galaxy collection install process 2026-04-05 01:48:17.340033 | orchestrator | Process install dependency map 2026-04-05 01:48:17.340066 | orchestrator | Starting collection install process 2026-04-05 01:48:17.340095 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-04-05 01:48:17.340128 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-04-05 01:48:17.340160 | 
orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-05 01:48:17.340218 | orchestrator | ok: Item: commons Runtime: 0:00:00.569784 2026-04-05 01:48:18.237716 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 01:48:18.238044 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-05 01:48:18.238108 | orchestrator | Starting galaxy collection install process 2026-04-05 01:48:18.238139 | orchestrator | Process install dependency map 2026-04-05 01:48:18.238166 | orchestrator | Starting collection install process 2026-04-05 01:48:18.238191 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-04-05 01:48:18.238216 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-04-05 01:48:18.238240 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-05 01:48:18.238281 | orchestrator | ok: Item: services Runtime: 0:00:00.628435 2026-04-05 01:48:18.264745 | 2026-04-05 01:48:18.264962 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 01:48:29.012041 | orchestrator | ok 2026-04-05 01:48:29.022442 | 2026-04-05 01:48:29.022574 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 01:49:29.066181 | orchestrator | ok 2026-04-05 01:49:29.076467 | 2026-04-05 01:49:29.076581 | TASK [Fetch manager ssh hostkey] 2026-04-05 01:49:30.650772 | orchestrator | Output suppressed because no_log was given 2026-04-05 01:49:30.666722 | 2026-04-05 01:49:30.666994 | TASK [Get ssh keypair from terraform environment] 2026-04-05 01:49:31.210272 | orchestrator | ok: Runtime: 0:00:00.009329 2026-04-05 01:49:31.226825 | 2026-04-05 01:49:31.227019 | TASK [Point out that the following task takes some time and does not give 
any output] 2026-04-05 01:49:31.265731 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-05 01:49:31.276080 | 2026-04-05 01:49:31.276220 | TASK [Run manager part 0] 2026-04-05 01:49:32.212206 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 01:49:32.256890 | orchestrator | 2026-04-05 01:49:32.256942 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-05 01:49:32.256951 | orchestrator | 2026-04-05 01:49:32.256966 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-05 01:49:34.327232 | orchestrator | ok: [testbed-manager] 2026-04-05 01:49:34.327311 | orchestrator | 2026-04-05 01:49:34.327338 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 01:49:34.327349 | orchestrator | 2026-04-05 01:49:34.327361 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 01:49:36.241262 | orchestrator | ok: [testbed-manager] 2026-04-05 01:49:36.241321 | orchestrator | 2026-04-05 01:49:36.241331 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 01:49:36.932074 | orchestrator | ok: [testbed-manager] 2026-04-05 01:49:36.932135 | orchestrator | 2026-04-05 01:49:36.932145 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-05 01:49:36.990475 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:49:36.990567 | orchestrator | 2026-04-05 01:49:36.990590 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-05 01:49:37.035379 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:49:37.035434 | orchestrator | 
2026-04-05 01:49:37.035443 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-05 01:49:37.069094 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:49:37.069139 | orchestrator | 2026-04-05 01:49:37.069145 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-05 01:49:37.821129 | orchestrator | changed: [testbed-manager] 2026-04-05 01:49:37.821190 | orchestrator | 2026-04-05 01:49:37.821201 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-05 01:52:32.263692 | orchestrator | changed: [testbed-manager] 2026-04-05 01:52:32.263799 | orchestrator | 2026-04-05 01:52:32.263818 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-05 01:53:54.416496 | orchestrator | changed: [testbed-manager] 2026-04-05 01:53:54.417364 | orchestrator | 2026-04-05 01:53:54.417388 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-05 01:54:19.668368 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:19.668469 | orchestrator | 2026-04-05 01:54:19.668485 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-05 01:54:29.376867 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:29.376941 | orchestrator | 2026-04-05 01:54:29.376957 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 01:54:29.431118 | orchestrator | ok: [testbed-manager] 2026-04-05 01:54:29.431174 | orchestrator | 2026-04-05 01:54:29.431183 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-05 01:54:30.260234 | orchestrator | ok: [testbed-manager] 2026-04-05 01:54:30.260292 | orchestrator | 2026-04-05 01:54:30.260299 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-04-05 01:54:31.004067 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:31.006424 | orchestrator | 2026-04-05 01:54:31.006463 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-05 01:54:37.676937 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:37.677001 | orchestrator | 2026-04-05 01:54:37.677010 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-05 01:54:44.343319 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:44.343368 | orchestrator | 2026-04-05 01:54:44.343377 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-05 01:54:47.232804 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:47.232843 | orchestrator | 2026-04-05 01:54:47.232850 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-05 01:54:49.069306 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:49.069400 | orchestrator | 2026-04-05 01:54:49.069422 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-05 01:54:50.224157 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 01:54:50.224284 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 01:54:50.224302 | orchestrator | 2026-04-05 01:54:50.224319 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-05 01:54:50.273795 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 01:54:50.273843 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 01:54:50.273849 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-04-05 01:54:50.273855 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-05 01:54:53.852643 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-05 01:54:53.852772 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-05 01:54:53.852786 | orchestrator | 2026-04-05 01:54:53.852797 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-05 01:54:54.451732 | orchestrator | changed: [testbed-manager] 2026-04-05 01:54:54.451824 | orchestrator | 2026-04-05 01:54:54.451831 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-05 01:55:13.256334 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-05 01:55:13.256447 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-05 01:55:13.256463 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-05 01:55:13.256475 | orchestrator | 2026-04-05 01:55:13.256487 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-05 01:55:15.806495 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-05 01:55:15.806658 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-05 01:55:15.806696 | orchestrator | 2026-04-05 01:55:15.806713 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-05 01:55:15.806725 | orchestrator | 2026-04-05 01:55:15.806737 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 01:55:17.234824 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:17.234915 | orchestrator | 2026-04-05 01:55:17.234932 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-04-05 01:55:17.286459 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:17.286521 | orchestrator | 2026-04-05 01:55:17.286527 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-05 01:55:17.358607 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:17.358652 | orchestrator | 2026-04-05 01:55:17.358661 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-05 01:55:18.195028 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:18.195069 | orchestrator | 2026-04-05 01:55:18.195078 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-05 01:55:18.941858 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:18.941904 | orchestrator | 2026-04-05 01:55:18.941912 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-05 01:55:20.456487 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-05 01:55:20.456600 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-05 01:55:20.456626 | orchestrator | 2026-04-05 01:55:20.456647 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 01:55:21.916023 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:21.916097 | orchestrator | 2026-04-05 01:55:21.916110 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 01:55:23.773474 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 01:55:23.773569 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-05 01:55:23.773600 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-05 01:55:23.773613 | orchestrator | 2026-04-05 01:55:23.773627 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-05 01:55:23.844436 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:23.844495 | orchestrator | 2026-04-05 01:55:23.844502 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 01:55:23.914236 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:23.914340 | orchestrator | 2026-04-05 01:55:23.914366 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 01:55:24.501369 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:24.501468 | orchestrator | 2026-04-05 01:55:24.501490 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 01:55:24.582244 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:24.582300 | orchestrator | 2026-04-05 01:55:24.582306 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 01:55:25.478192 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 01:55:25.478274 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:25.478288 | orchestrator | 2026-04-05 01:55:25.478297 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 01:55:25.517466 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:25.517524 | orchestrator | 2026-04-05 01:55:25.517533 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 01:55:25.555100 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:25.555186 | orchestrator | 2026-04-05 01:55:25.555204 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 01:55:25.594433 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:25.594505 | orchestrator | 2026-04-05 01:55:25.594514 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 01:55:25.677847 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:25.677938 | orchestrator | 2026-04-05 01:55:25.677955 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 01:55:26.445851 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:26.445975 | orchestrator | 2026-04-05 01:55:26.445993 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-05 01:55:26.446006 | orchestrator | 2026-04-05 01:55:26.446081 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 01:55:27.950563 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:27.950598 | orchestrator | 2026-04-05 01:55:27.950604 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-05 01:55:28.929771 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:28.929902 | orchestrator | 2026-04-05 01:55:28.929919 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:55:28.929934 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-05 01:55:28.929946 | orchestrator | 2026-04-05 01:55:29.524360 | orchestrator | ok: Runtime: 0:05:57.451821 2026-04-05 01:55:29.533939 | 2026-04-05 01:55:29.534053 | TASK [Point out that logging in to the manager is now possible] 2026-04-05 01:55:29.564494 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-05 01:55:29.571460 | 2026-04-05 01:55:29.571563 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-05 01:55:29.619945 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager.
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-05 01:55:29.641375 | 2026-04-05 01:55:29.641604 | TASK [Run manager part 1 + 2] 2026-04-05 01:55:30.582970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-05 01:55:30.643260 | orchestrator | 2026-04-05 01:55:30.643311 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-05 01:55:30.643318 | orchestrator | 2026-04-05 01:55:30.643330 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 01:55:33.231619 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:33.231718 | orchestrator | 2026-04-05 01:55:33.231741 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-05 01:55:33.268205 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:33.268258 | orchestrator | 2026-04-05 01:55:33.268269 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-05 01:55:33.322040 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:33.322094 | orchestrator | 2026-04-05 01:55:33.322102 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 01:55:33.374788 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:33.374849 | orchestrator | 2026-04-05 01:55:33.374861 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 01:55:33.459898 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:33.459963 | orchestrator | 2026-04-05 01:55:33.459974 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 01:55:33.526778 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:33.526847 | orchestrator | 2026-04-05 01:55:33.526861 | orchestrator | TASK
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 01:55:33.579281 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-05 01:55:33.579327 | orchestrator | 2026-04-05 01:55:33.579333 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 01:55:34.347373 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:34.347426 | orchestrator | 2026-04-05 01:55:34.347434 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 01:55:34.395805 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:34.395856 | orchestrator | 2026-04-05 01:55:34.395863 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 01:55:35.810505 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:35.810574 | orchestrator | 2026-04-05 01:55:35.810585 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 01:55:36.429887 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:36.429939 | orchestrator | 2026-04-05 01:55:36.429947 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 01:55:37.640752 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:37.640810 | orchestrator | 2026-04-05 01:55:37.640820 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 01:55:54.062008 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:54.062136 | orchestrator | 2026-04-05 01:55:54.062155 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-05 01:55:54.793882 | orchestrator | ok: [testbed-manager] 2026-04-05 01:55:54.793984 | orchestrator | 2026-04-05 
01:55:54.794003 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-05 01:55:54.854284 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:55:54.854386 | orchestrator | 2026-04-05 01:55:54.854404 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-05 01:55:55.881656 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:55.881809 | orchestrator | 2026-04-05 01:55:55.881835 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-05 01:55:56.930793 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:56.930893 | orchestrator | 2026-04-05 01:55:56.930911 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-05 01:55:57.509084 | orchestrator | changed: [testbed-manager] 2026-04-05 01:55:57.509123 | orchestrator | 2026-04-05 01:55:57.509129 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-05 01:55:57.552069 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-05 01:55:57.552165 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-05 01:55:57.552177 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-05 01:55:57.552185 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-05 01:56:00.062713 | orchestrator | changed: [testbed-manager] 2026-04-05 01:56:00.063042 | orchestrator | 2026-04-05 01:56:00.063085 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-05 01:56:08.910293 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-05 01:56:08.910375 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-05 01:56:08.910392 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-05 01:56:08.910404 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-05 01:56:08.910422 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-05 01:56:08.910434 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-05 01:56:08.910445 | orchestrator | 2026-04-05 01:56:08.910458 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-05 01:56:09.852929 | orchestrator | changed: [testbed-manager] 2026-04-05 01:56:09.852967 | orchestrator | 2026-04-05 01:56:09.852975 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-05 01:56:13.006588 | orchestrator | changed: [testbed-manager] 2026-04-05 01:56:13.006632 | orchestrator | 2026-04-05 01:56:13.006641 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-05 01:56:13.050357 | orchestrator | skipping: [testbed-manager] 2026-04-05 01:56:13.050548 | orchestrator | 2026-04-05 01:56:13.050557 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-05 01:58:00.906141 | orchestrator | changed: [testbed-manager] 2026-04-05 01:58:00.906260 | orchestrator | 2026-04-05 01:58:00.906290 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 01:58:02.154697 | orchestrator | ok: [testbed-manager] 2026-04-05 01:58:02.154824 | 
orchestrator | 2026-04-05 01:58:02.154845 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 01:58:02.154860 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-05 01:58:02.154872 | orchestrator | 2026-04-05 01:58:02.793151 | orchestrator | ok: Runtime: 0:02:32.292958 2026-04-05 01:58:02.810645 | 2026-04-05 01:58:02.810789 | TASK [Reboot manager] 2026-04-05 01:58:04.352192 | orchestrator | ok: Runtime: 0:00:01.021903 2026-04-05 01:58:04.369672 | 2026-04-05 01:58:04.369828 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-05 01:58:21.383261 | orchestrator | ok 2026-04-05 01:58:21.394027 | 2026-04-05 01:58:21.394152 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-05 01:59:21.451301 | orchestrator | ok 2026-04-05 01:59:21.463191 | 2026-04-05 01:59:21.463343 | TASK [Deploy manager + bootstrap nodes] 2026-04-05 01:59:23.929817 | orchestrator | 2026-04-05 01:59:23.929985 | orchestrator | # DEPLOY MANAGER 2026-04-05 01:59:23.930004 | orchestrator | 2026-04-05 01:59:23.930076 | orchestrator | + set -e 2026-04-05 01:59:23.930092 | orchestrator | + echo 2026-04-05 01:59:23.930102 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-05 01:59:23.930115 | orchestrator | + echo 2026-04-05 01:59:23.930152 | orchestrator | + cat /opt/manager-vars.sh 2026-04-05 01:59:23.933395 | orchestrator | export NUMBER_OF_NODES=6 2026-04-05 01:59:23.933432 | orchestrator | 2026-04-05 01:59:23.933443 | orchestrator | export CEPH_VERSION=reef 2026-04-05 01:59:23.933453 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-05 01:59:23.933462 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-05 01:59:23.933480 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-05 01:59:23.933488 | orchestrator | 2026-04-05 01:59:23.933502 | orchestrator | export ARA=false 2026-04-05 01:59:23.933510 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-05 01:59:23.933523 | orchestrator | export TEMPEST=false 2026-04-05 01:59:23.933532 | orchestrator | export IS_ZUUL=true 2026-04-05 01:59:23.933540 | orchestrator | 2026-04-05 01:59:23.933552 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 01:59:23.933561 | orchestrator | export EXTERNAL_API=false 2026-04-05 01:59:23.933569 | orchestrator | 2026-04-05 01:59:23.933577 | orchestrator | export IMAGE_USER=ubuntu 2026-04-05 01:59:23.933587 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-05 01:59:23.933595 | orchestrator | 2026-04-05 01:59:23.933603 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-05 01:59:23.934060 | orchestrator | 2026-04-05 01:59:23.934138 | orchestrator | + echo 2026-04-05 01:59:23.934154 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:59:23.934697 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:59:23.934729 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:59:23.934738 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:59:23.934748 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:59:23.935041 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:59:23.935057 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:59:23.935067 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:59:23.935075 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:59:23.935083 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:59:23.935092 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:59:23.935100 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:59:23.935108 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 01:59:23.935117 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 01:59:23.935125 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 01:59:23.935141 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 01:59:23.935150 | orchestrator | ++ export ARA=false 
2026-04-05 01:59:23.935158 | orchestrator | ++ ARA=false 2026-04-05 01:59:23.935166 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:59:23.935174 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:59:23.935181 | orchestrator | ++ export TEMPEST=false 2026-04-05 01:59:23.935189 | orchestrator | ++ TEMPEST=false 2026-04-05 01:59:23.935197 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:59:23.935205 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:59:23.935213 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 01:59:23.935221 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 01:59:23.935229 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:59:23.935236 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:59:23.935244 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:59:23.935252 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:59:23.935260 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:59:23.935268 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:59:23.935277 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:59:23.935285 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:59:23.935293 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-05 01:59:23.992804 | orchestrator | + docker version 2026-04-05 01:59:24.114108 | orchestrator | Client: Docker Engine - Community 2026-04-05 01:59:24.114193 | orchestrator | Version: 27.5.1 2026-04-05 01:59:24.114210 | orchestrator | API version: 1.47 2026-04-05 01:59:24.114222 | orchestrator | Go version: go1.22.11 2026-04-05 01:59:24.114234 | orchestrator | Git commit: 9f9e405 2026-04-05 01:59:24.114245 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 01:59:24.114257 | orchestrator | OS/Arch: linux/amd64 2026-04-05 01:59:24.114269 | orchestrator | Context: default 2026-04-05 01:59:24.114297 | orchestrator | 2026-04-05 01:59:24.114310 | 
orchestrator | Server: Docker Engine - Community 2026-04-05 01:59:24.114321 | orchestrator | Engine: 2026-04-05 01:59:24.114333 | orchestrator | Version: 27.5.1 2026-04-05 01:59:24.114344 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-05 01:59:24.114395 | orchestrator | Go version: go1.22.11 2026-04-05 01:59:24.114407 | orchestrator | Git commit: 4c9b3b0 2026-04-05 01:59:24.114418 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-05 01:59:24.114429 | orchestrator | OS/Arch: linux/amd64 2026-04-05 01:59:24.114440 | orchestrator | Experimental: false 2026-04-05 01:59:24.114452 | orchestrator | containerd: 2026-04-05 01:59:24.114463 | orchestrator | Version: v2.2.2 2026-04-05 01:59:24.114474 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-05 01:59:24.114486 | orchestrator | runc: 2026-04-05 01:59:24.114497 | orchestrator | Version: 1.3.4 2026-04-05 01:59:24.114508 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-05 01:59:24.114519 | orchestrator | docker-init: 2026-04-05 01:59:24.114529 | orchestrator | Version: 0.19.0 2026-04-05 01:59:24.114542 | orchestrator | GitCommit: de40ad0 2026-04-05 01:59:24.116907 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-05 01:59:24.127479 | orchestrator | + set -e 2026-04-05 01:59:24.127559 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 01:59:24.127575 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 01:59:24.127587 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 01:59:24.127598 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 01:59:24.127609 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 01:59:24.127620 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 01:59:24.127632 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 01:59:24.127643 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 01:59:24.127654 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 01:59:24.127688 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-05 01:59:24.127699 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 01:59:24.127710 | orchestrator | ++ export ARA=false 2026-04-05 01:59:24.127721 | orchestrator | ++ ARA=false 2026-04-05 01:59:24.127733 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 01:59:24.127743 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 01:59:24.127754 | orchestrator | ++ export TEMPEST=false 2026-04-05 01:59:24.127765 | orchestrator | ++ TEMPEST=false 2026-04-05 01:59:24.127776 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 01:59:24.127786 | orchestrator | ++ IS_ZUUL=true 2026-04-05 01:59:24.127797 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 01:59:24.127809 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 01:59:24.127819 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 01:59:24.127830 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 01:59:24.127841 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 01:59:24.127851 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 01:59:24.127863 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 01:59:24.127874 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 01:59:24.127885 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 01:59:24.127896 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 01:59:24.127907 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 01:59:24.127930 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 01:59:24.127951 | orchestrator | ++ INTERACTIVE=false 2026-04-05 01:59:24.127962 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 01:59:24.127978 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 01:59:24.127989 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-05 01:59:24.128000 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-05 01:59:24.136288 | orchestrator | + set -e 2026-04-05 
01:59:24.136430 | orchestrator | + VERSION=9.5.0
2026-04-05 01:59:24.136457 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 01:59:24.145891 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-04-05 01:59:24.145978 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 01:59:24.150327 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 01:59:24.155140 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-05 01:59:24.161195 | orchestrator | + set -e
2026-04-05 01:59:24.161271 | orchestrator | /opt/configuration ~
2026-04-05 01:59:24.161290 | orchestrator | + pushd /opt/configuration
2026-04-05 01:59:24.161305 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 01:59:24.164567 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 01:59:24.167486 | orchestrator | ++ deactivate nondestructive
2026-04-05 01:59:24.167539 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:24.167553 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:24.167585 | orchestrator | ++ hash -r
2026-04-05 01:59:24.167596 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:24.167606 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 01:59:24.167615 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 01:59:24.167625 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 01:59:24.167635 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 01:59:24.167644 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 01:59:24.167654 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 01:59:24.167713 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 01:59:24.167725 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 01:59:24.167736 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 01:59:24.167746 | orchestrator | ++ export PATH
2026-04-05 01:59:24.167756 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:24.167766 | orchestrator | ++ '[' -z '' ']'
2026-04-05 01:59:24.167775 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 01:59:24.167785 | orchestrator | ++ PS1='(venv) '
2026-04-05 01:59:24.167795 | orchestrator | ++ export PS1
2026-04-05 01:59:24.167804 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 01:59:24.167814 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 01:59:24.167824 | orchestrator | ++ hash -r
2026-04-05 01:59:24.167834 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-05 01:59:25.436152 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-05 01:59:25.437271 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-05 01:59:25.438633 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-05 01:59:25.439824 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-05 01:59:25.443112 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-05 01:59:25.468924 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-05 01:59:25.471290 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-05 01:59:25.472809 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-05 01:59:25.474275 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-05 01:59:25.503363 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-05 01:59:25.504616 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-05 01:59:25.506210 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-05 01:59:25.507292 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-05 01:59:25.511017 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-05 01:59:25.690228 | orchestrator | ++ which gilt
2026-04-05 01:59:25.695045 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-05 01:59:25.695110 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-05 01:59:25.944285 | orchestrator | osism.cfg-generics:
2026-04-05 01:59:26.093506 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-05 01:59:26.093567 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-05 01:59:26.093583 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-05 01:59:26.093833 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-05 01:59:26.974692 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-05 01:59:26.987867 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-05 01:59:27.304028 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-05 01:59:27.354013 | orchestrator | ~
2026-04-05 01:59:27.354163 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 01:59:27.354180 | orchestrator | + deactivate
2026-04-05 01:59:27.354192 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-05 01:59:27.354205 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 01:59:27.354216 | orchestrator | + export PATH
2026-04-05 01:59:27.354227 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-05 01:59:27.354238 | orchestrator | + '[' -n '' ']'
2026-04-05 01:59:27.354252 | orchestrator | + hash -r
2026-04-05 01:59:27.354263 | orchestrator | + '[' -n '' ']'
2026-04-05 01:59:27.354274 | orchestrator | + unset VIRTUAL_ENV
2026-04-05 01:59:27.354285 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-05 01:59:27.354296 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-05 01:59:27.354306 | orchestrator | + unset -f deactivate
2026-04-05 01:59:27.354317 | orchestrator | + popd
2026-04-05 01:59:27.356148 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-05 01:59:27.356187 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-05 01:59:27.357539 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-05 01:59:27.408954 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 01:59:27.409058 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-05 01:59:27.410385 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-05 01:59:27.469224 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 01:59:27.470159 | orchestrator | ++ semver 2024.2 2025.1
2026-04-05 01:59:27.522477 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 01:59:27.522558 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-05 01:59:27.615043 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 01:59:27.615125 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 01:59:27.615140 | orchestrator | ++ deactivate nondestructive
2026-04-05 01:59:27.615153 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:27.615164 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:27.615176 | orchestrator | ++ hash -r
2026-04-05 01:59:27.615299 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:27.615317 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 01:59:27.615328 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 01:59:27.615340 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 01:59:27.615356 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 01:59:27.615414 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 01:59:27.615428 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 01:59:27.615439 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 01:59:27.615568 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 01:59:27.615602 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 01:59:27.615619 | orchestrator | ++ export PATH
2026-04-05 01:59:27.615771 | orchestrator | ++ '[' -n '' ']'
2026-04-05 01:59:27.616158 | orchestrator | ++ '[' -z '' ']'
2026-04-05 01:59:27.616237 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 01:59:27.616253 | orchestrator | ++ PS1='(venv) '
2026-04-05 01:59:27.616265 | orchestrator | ++ export PS1
2026-04-05 01:59:27.616276 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 01:59:27.616287 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 01:59:27.616299 | orchestrator | ++ hash -r
2026-04-05 01:59:27.616320 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-05 01:59:28.859344 | orchestrator |
2026-04-05 01:59:28.859456 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-05 01:59:28.859482 | orchestrator |
2026-04-05 01:59:28.859502 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 01:59:29.385724 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:29.385794 | orchestrator |
2026-04-05 01:59:29.385807 | orchestrator | TASK [Copy fact files] *********************************************************
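The `semver 9.5.0 7.0.0` calls in the bash trace above print 1, 0, or -1, and the script gates version-dependent settings such as `enable_osism_kubernetes: true` on `[[ result -ge 0 ]]`. A minimal sketch of that three-way comparison for plain dotted releases (this is a hypothetical re-implementation, not the deployed `semver` helper; pre-release suffixes such as `10.0.0-0`, which the trace also compares, are out of scope here):

```python
def semver(a: str, b: str) -> int:
    """Three-way compare dotted release versions: 1, 0, or -1."""
    pa = [int(part) for part in a.split(".")]
    pb = [int(part) for part in b.split(".")]
    # Python list comparison is lexicographic, which matches the
    # ordering of major.minor.patch tuples.
    return (pa > pb) - (pa < pb)

# The script enables a feature when the deployed version meets a minimum,
# mirroring `[[ $(semver 9.5.0 7.0.0) -ge 0 ]]` in the trace (result: 1).
if semver("9.5.0", "7.0.0") >= 0:
    print("enable_osism_kubernetes: true")
```

The same shape explains the later checks: `semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` both yield -1, so those branches are skipped.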
2026-04-05 01:59:30.297425 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:30.297491 | orchestrator |
2026-04-05 01:59:30.297503 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-05 01:59:30.297529 | orchestrator |
2026-04-05 01:59:30.297538 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 01:59:32.500641 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:32.500781 | orchestrator |
2026-04-05 01:59:32.500799 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-05 01:59:32.556310 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:32.556393 | orchestrator |
2026-04-05 01:59:32.556409 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-05 01:59:33.019141 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:33.019225 | orchestrator |
2026-04-05 01:59:33.019240 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-05 01:59:33.068546 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:59:33.068633 | orchestrator |
2026-04-05 01:59:33.068650 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-05 01:59:33.437970 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:33.438161 | orchestrator |
2026-04-05 01:59:33.438178 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-05 01:59:33.768206 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:33.768305 | orchestrator |
2026-04-05 01:59:33.768322 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-05 01:59:33.904195 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:59:33.904290 | orchestrator |
2026-04-05 01:59:33.904308 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-05 01:59:33.904321 | orchestrator |
2026-04-05 01:59:33.904333 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 01:59:35.779827 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:35.779920 | orchestrator |
2026-04-05 01:59:35.779932 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-05 01:59:35.900781 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-05 01:59:35.901700 | orchestrator |
2026-04-05 01:59:35.901778 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-05 01:59:35.962055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-05 01:59:35.962156 | orchestrator |
2026-04-05 01:59:35.962174 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-05 01:59:37.142448 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-05 01:59:37.142559 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-05 01:59:37.142585 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-05 01:59:37.142608 | orchestrator |
2026-04-05 01:59:37.142630 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-05 01:59:39.048365 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-05 01:59:39.048473 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-05 01:59:39.048488 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-05 01:59:39.048500 | orchestrator |
2026-04-05 01:59:39.048532 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-05 01:59:39.715095 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 01:59:39.715197 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:39.715214 | orchestrator |
2026-04-05 01:59:39.715228 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-05 01:59:40.397179 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 01:59:40.397272 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:40.397287 | orchestrator |
2026-04-05 01:59:40.397299 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-05 01:59:40.461506 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:59:40.461598 | orchestrator |
2026-04-05 01:59:40.461614 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-05 01:59:40.816367 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:40.816473 | orchestrator |
2026-04-05 01:59:40.816489 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-05 01:59:40.891092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-05 01:59:40.891193 | orchestrator |
2026-04-05 01:59:40.891209 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-05 01:59:42.051602 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:42.051742 | orchestrator |
2026-04-05 01:59:42.051761 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-05 01:59:42.907080 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:42.907166 | orchestrator |
2026-04-05 01:59:42.907177 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-05 01:59:54.596577 | orchestrator | changed: [testbed-manager]
2026-04-05 01:59:54.596759 | orchestrator |
2026-04-05 01:59:54.596777 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-05 01:59:54.660573 | orchestrator | skipping: [testbed-manager]
2026-04-05 01:59:54.660706 | orchestrator |
2026-04-05 01:59:54.660743 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-05 01:59:54.660757 | orchestrator |
2026-04-05 01:59:54.660768 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 01:59:56.651630 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:56.651832 | orchestrator |
2026-04-05 01:59:56.651863 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-05 01:59:56.773190 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-05 01:59:56.773310 | orchestrator |
2026-04-05 01:59:56.773334 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-05 01:59:56.823849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 01:59:56.823944 | orchestrator |
2026-04-05 01:59:56.823960 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-05 01:59:59.532185 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:59.533049 | orchestrator |
2026-04-05 01:59:59.533083 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-05 01:59:59.583391 | orchestrator | ok: [testbed-manager]
2026-04-05 01:59:59.583507 | orchestrator |
2026-04-05 01:59:59.583530 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-05 01:59:59.731477 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-05 01:59:59.731590 | orchestrator |
2026-04-05 01:59:59.731606 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-05 02:00:02.689421 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-05 02:00:02.689546 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-05 02:00:02.689572 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-05 02:00:02.689594 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-05 02:00:02.689614 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-05 02:00:02.689633 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-05 02:00:02.689652 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-05 02:00:02.689698 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-05 02:00:02.689721 | orchestrator |
2026-04-05 02:00:02.689741 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-05 02:00:03.384196 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:03.384287 | orchestrator |
2026-04-05 02:00:03.384307 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-05 02:00:04.061155 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:04.061279 | orchestrator |
2026-04-05 02:00:04.061306 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-05 02:00:04.152079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-05 02:00:04.152181 | orchestrator |
2026-04-05 02:00:04.152199 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-05 02:00:05.455087 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-05 02:00:05.455227 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-05 02:00:05.455247 | orchestrator |
2026-04-05 02:00:05.455261 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-05 02:00:06.131562 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:06.131665 | orchestrator |
2026-04-05 02:00:06.131732 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-05 02:00:06.196876 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:00:06.196946 | orchestrator |
2026-04-05 02:00:06.196954 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-05 02:00:06.299514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-05 02:00:06.299584 | orchestrator |
2026-04-05 02:00:06.299591 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-05 02:00:06.984506 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:06.984603 | orchestrator |
2026-04-05 02:00:06.984616 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-05 02:00:07.054247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-05 02:00:07.054333 | orchestrator |
2026-04-05 02:00:07.054346 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-05 02:00:08.507260 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 02:00:08.507347 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-05 02:00:08.507356 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:08.507364 | orchestrator |
2026-04-05 02:00:08.507372 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-05 02:00:09.145441 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:09.145522 | orchestrator |
2026-04-05 02:00:09.145536 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-05 02:00:09.202565 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:00:09.202668 | orchestrator |
2026-04-05 02:00:09.202735 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-05 02:00:09.334817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-05 02:00:09.334905 | orchestrator |
2026-04-05 02:00:09.334918 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-05 02:00:09.900006 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:09.900698 | orchestrator |
2026-04-05 02:00:09.900720 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-05 02:00:10.301251 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:10.301310 | orchestrator |
2026-04-05 02:00:10.301319 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-05 02:00:11.427131 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-05 02:00:11.427199 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-05 02:00:11.427210 | orchestrator |
2026-04-05 02:00:11.427220 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-05 02:00:12.014436 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:12.014522 | orchestrator |
2026-04-05 02:00:12.014551 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-05 02:00:12.382120 | orchestrator | ok: [testbed-manager]
2026-04-05 02:00:12.382218 | orchestrator |
2026-04-05 02:00:12.382234 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-05 02:00:12.735721 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:12.735802 | orchestrator |
2026-04-05 02:00:12.735812 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-05 02:00:12.790835 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:00:12.790950 | orchestrator |
2026-04-05 02:00:12.790974 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-05 02:00:12.877196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-05 02:00:12.877385 | orchestrator |
2026-04-05 02:00:12.877413 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-05 02:00:12.935546 | orchestrator | ok: [testbed-manager]
2026-04-05 02:00:12.935634 | orchestrator |
2026-04-05 02:00:12.935649 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-05 02:00:15.056774 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-05 02:00:15.057593 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-05 02:00:15.057615 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-05 02:00:15.057623 | orchestrator |
2026-04-05 02:00:15.057630 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-05 02:00:15.853195 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:15.853325 | orchestrator |
2026-04-05 02:00:15.853336 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-05 02:00:16.599216 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:16.599354 | orchestrator |
2026-04-05 02:00:16.599370 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-05 02:00:17.352201 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:17.352306 | orchestrator |
2026-04-05 02:00:17.352340 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-05 02:00:17.437909 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-05 02:00:17.438010 | orchestrator |
2026-04-05 02:00:17.438101 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-05 02:00:17.490947 | orchestrator | ok: [testbed-manager]
2026-04-05 02:00:17.491045 | orchestrator |
2026-04-05 02:00:17.491061 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-05 02:00:18.219909 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-05 02:00:18.220008 | orchestrator |
2026-04-05 02:00:18.220025 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-05 02:00:18.319400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-05 02:00:18.319500 | orchestrator |
2026-04-05 02:00:18.319516 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-05 02:00:19.073801 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:19.073888 | orchestrator |
2026-04-05 02:00:19.073901 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-05 02:00:19.719378 | orchestrator | ok: [testbed-manager]
2026-04-05 02:00:19.719498 | orchestrator |
2026-04-05 02:00:19.719529 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-05 02:00:19.774352 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:00:19.774452 | orchestrator |
2026-04-05 02:00:19.774468 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-05 02:00:19.835331 | orchestrator | ok: [testbed-manager]
2026-04-05 02:00:19.835422 | orchestrator |
2026-04-05 02:00:19.835437 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-05 02:00:20.722514 | orchestrator | changed: [testbed-manager]
2026-04-05 02:00:20.722617 | orchestrator |
2026-04-05 02:00:20.722633 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-05 02:01:37.218082 | orchestrator | changed: [testbed-manager]
2026-04-05 02:01:37.218197 | orchestrator |
2026-04-05 02:01:37.218215 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-05 02:01:38.252766 | orchestrator | ok: [testbed-manager]
2026-04-05 02:01:38.252868 | orchestrator |
2026-04-05 02:01:38.252884 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-05 02:01:38.303675 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:01:38.303818 | orchestrator |
2026-04-05 02:01:38.303833 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-05 02:01:40.864818 | orchestrator | changed: [testbed-manager]
2026-04-05 02:01:40.864923 | orchestrator |
2026-04-05 02:01:40.864940 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-05 02:01:40.983217 | orchestrator | ok: [testbed-manager]
2026-04-05 02:01:40.983352 | orchestrator |
2026-04-05 02:01:40.983379 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 02:01:40.983400 | orchestrator |
2026-04-05 02:01:40.983418 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-05 02:01:41.048012 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:01:41.048085 | orchestrator |
2026-04-05 02:01:41.048095 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-05 02:02:41.100918 | orchestrator | Pausing for 60 seconds
2026-04-05 02:02:41.101011 | orchestrator | changed: [testbed-manager]
2026-04-05 02:02:41.101020 | orchestrator |
2026-04-05 02:02:41.101028 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-05 02:02:44.742998 | orchestrator | changed: [testbed-manager]
2026-04-05 02:02:44.743100 | orchestrator |
2026-04-05 02:02:44.743117 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-05 02:03:47.035788 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-05 02:03:47.035892 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-05 02:03:47.035927 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
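The `Wait for an healthy manager service` handler above is an Ansible do-until loop: it re-runs a health probe up to 50 times, logging `FAILED - RETRYING … (N retries left)` until the service reports healthy. A minimal sketch of that retry pattern, assuming a caller-supplied `probe` callable (the real handler's probe is not shown in this log; a Docker-based one might shell out to `docker inspect --format '{{.State.Health.Status}}' <container>`):

```python
import time
from typing import Callable


def wait_for_healthy(probe: Callable[[], str], retries: int = 50,
                     delay: float = 0.0) -> bool:
    """Poll `probe` until it returns 'healthy' or retries are exhausted."""
    for attempt in range(retries):
        if probe() == "healthy":
            return True
        # Mirrors the "(N retries left)" countdown seen in the log.
        print(f"FAILED - RETRYING ({retries - attempt - 1} retries left)")
        time.sleep(delay)
    return False


# Simulated probe: unhealthy twice, then healthy on the third poll,
# matching the three retry messages before success in the log above.
states = iter(["starting", "starting", "healthy"])
print(wait_for_healthy(lambda: next(states), retries=5))  # True
```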
2026-04-05 02:03:47.035939 | orchestrator | changed: [testbed-manager]
2026-04-05 02:03:47.035951 | orchestrator |
2026-04-05 02:03:47.035962 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-05 02:03:58.435431 | orchestrator | changed: [testbed-manager]
2026-04-05 02:03:58.435557 | orchestrator |
2026-04-05 02:03:58.435583 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-05 02:03:58.535891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-05 02:03:58.535982 | orchestrator |
2026-04-05 02:03:58.535996 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-05 02:03:58.536007 | orchestrator |
2026-04-05 02:03:58.536018 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-05 02:03:58.595772 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:03:58.595880 | orchestrator |
2026-04-05 02:03:58.595911 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-05 02:03:58.682346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-05 02:03:58.682469 | orchestrator |
2026-04-05 02:03:58.682495 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-05 02:03:59.572619 | orchestrator | changed: [testbed-manager]
2026-04-05 02:03:59.572769 | orchestrator |
2026-04-05 02:03:59.572788 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-05 02:04:03.095098 | orchestrator | ok: [testbed-manager]
2026-04-05 02:04:03.095203 | orchestrator |
2026-04-05 02:04:03.095221 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-05 02:04:03.166850 | orchestrator | ok: [testbed-manager] => {
2026-04-05 02:04:03.166938 | orchestrator | "version_check_result.stdout_lines": [
2026-04-05 02:04:03.166952 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-05 02:04:03.166962 | orchestrator | "Checking running containers against expected versions...",
2026-04-05 02:04:03.166974 | orchestrator | "",
2026-04-05 02:04:03.166984 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-05 02:04:03.166995 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-05 02:04:03.167006 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167016 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-05 02:04:03.167026 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167036 | orchestrator | "",
2026-04-05 02:04:03.167046 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-05 02:04:03.167081 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-05 02:04:03.167092 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167102 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-05 02:04:03.167112 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167121 | orchestrator | "",
2026-04-05 02:04:03.167131 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-05 02:04:03.167141 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-05 02:04:03.167150 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167160 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-05 02:04:03.167170 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167179 | orchestrator | "",
2026-04-05 02:04:03.167189 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-05 02:04:03.167199 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-05 02:04:03.167208 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167218 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-05 02:04:03.167227 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167237 | orchestrator | "",
2026-04-05 02:04:03.167249 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-05 02:04:03.167259 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-05 02:04:03.167268 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167278 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-05 02:04:03.167287 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167297 | orchestrator | "",
2026-04-05 02:04:03.167307 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-05 02:04:03.167316 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-05 02:04:03.167326 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167336 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-05 02:04:03.167346 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167363 | orchestrator | "",
2026-04-05 02:04:03.167381 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-05 02:04:03.167399 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 02:04:03.167417 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167435 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-05 02:04:03.167452 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167469 | orchestrator | "",
2026-04-05 02:04:03.167486 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-05 02:04:03.167503 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 02:04:03.167521 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167538 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-05 02:04:03.167558 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167575 | orchestrator | "",
2026-04-05 02:04:03.167592 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-05 02:04:03.167611 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-05 02:04:03.167628 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167645 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-05 02:04:03.167662 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167680 | orchestrator | "",
2026-04-05 02:04:03.167763 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-05 02:04:03.167777 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 02:04:03.167788 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167801 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-05 02:04:03.167811 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167821 | orchestrator | "",
2026-04-05 02:04:03.167831 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-05 02:04:03.167852 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-05 02:04:03.167863 | orchestrator | " Enabled: true",
2026-04-05 02:04:03.167872 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-05 02:04:03.167882 | orchestrator | " Status: ✅ MATCH",
2026-04-05 02:04:03.167892 | orchestrator | "",
2026-04-05 02:04:03.167902 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-05 02:04:03.167912 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.167922 | orchestrator | " Enabled: true", 2026-04-05 02:04:03.167931 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.167941 | orchestrator | " Status: ✅ MATCH", 2026-04-05 02:04:03.167952 | orchestrator | "", 2026-04-05 02:04:03.167961 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-05 02:04:03.167971 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.167981 | orchestrator | " Enabled: true", 2026-04-05 02:04:03.167991 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.168000 | orchestrator | " Status: ✅ MATCH", 2026-04-05 02:04:03.168010 | orchestrator | "", 2026-04-05 02:04:03.168019 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-05 02:04:03.168029 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.168039 | orchestrator | " Enabled: true", 2026-04-05 02:04:03.168049 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.168079 | orchestrator | " Status: ✅ MATCH", 2026-04-05 02:04:03.168090 | orchestrator | "", 2026-04-05 02:04:03.168099 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-05 02:04:03.168109 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.168128 | orchestrator | " Enabled: true", 2026-04-05 02:04:03.168138 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-05 02:04:03.168148 | orchestrator | " Status: ✅ MATCH", 2026-04-05 02:04:03.168158 | orchestrator | "", 2026-04-05 02:04:03.168168 | orchestrator | "=== Summary ===", 2026-04-05 02:04:03.168177 | orchestrator | "Errors (version mismatches): 0", 2026-04-05 02:04:03.168187 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-05 02:04:03.168197 | orchestrator | "", 2026-04-05 02:04:03.168207 | orchestrator | "✅ All running containers match expected versions!" 2026-04-05 02:04:03.168216 | orchestrator | ] 2026-04-05 02:04:03.168226 | orchestrator | } 2026-04-05 02:04:03.168237 | orchestrator | 2026-04-05 02:04:03.168247 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-05 02:04:03.238568 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:04:03.238661 | orchestrator | 2026-04-05 02:04:03.238676 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:04:03.238772 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-05 02:04:03.238795 | orchestrator | 2026-04-05 02:04:03.358215 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-05 02:04:03.358298 | orchestrator | + deactivate 2026-04-05 02:04:03.358311 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-05 02:04:03.358323 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-05 02:04:03.358332 | orchestrator | + export PATH 2026-04-05 02:04:03.358342 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-05 02:04:03.358352 | orchestrator | + '[' -n '' ']' 2026-04-05 02:04:03.358362 | orchestrator | + hash -r 2026-04-05 02:04:03.358371 | orchestrator | + '[' -n '' ']' 2026-04-05 02:04:03.358380 | orchestrator | + unset VIRTUAL_ENV 2026-04-05 02:04:03.358389 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-05 02:04:03.358399 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-05 02:04:03.358408 | orchestrator | + unset -f deactivate 2026-04-05 02:04:03.358418 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-05 02:04:03.365819 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 02:04:03.365862 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-05 02:04:03.365898 | orchestrator | + local max_attempts=60 2026-04-05 02:04:03.365909 | orchestrator | + local name=ceph-ansible 2026-04-05 02:04:03.365918 | orchestrator | + local attempt_num=1 2026-04-05 02:04:03.366596 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-05 02:04:03.406558 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 02:04:03.406654 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 02:04:03.406671 | orchestrator | + local max_attempts=60 2026-04-05 02:04:03.406683 | orchestrator | + local name=kolla-ansible 2026-04-05 02:04:03.406728 | orchestrator | + local attempt_num=1 2026-04-05 02:04:03.406994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 02:04:03.444885 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 02:04:03.444975 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-05 02:04:03.444990 | orchestrator | + local max_attempts=60 2026-04-05 02:04:03.445003 | orchestrator | + local name=osism-ansible 2026-04-05 02:04:03.445014 | orchestrator | + local attempt_num=1 2026-04-05 02:04:03.445621 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 02:04:03.490442 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 02:04:03.490519 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 02:04:03.490529 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-05 02:04:04.231239 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-05 02:04:04.415508 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-05 02:04:04.415607 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 02:04:04.415623 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 02:04:04.415636 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-05 02:04:04.415650 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-05 02:04:04.415681 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-05 02:04:04.415758 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-05 02:04:04.415770 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-05 02:04:04.415781 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-05 02:04:04.415792 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-05 02:04:04.415803 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-04-05 02:04:04.415814 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-05 02:04:04.415825 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-05 02:04:04.415860 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-05 02:04:04.415871 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-05 02:04:04.415884 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-05 02:04:04.422242 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-05 02:04:04.476409 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 02:04:04.476497 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-05 02:04:04.481095 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-05 02:04:16.793082 | orchestrator | 2026-04-05 02:04:16 | INFO  | Task 9a146b45-f58b-413c-8455-9cc9c88e02b3 (resolvconf) was prepared for execution. 2026-04-05 02:04:16.793188 | orchestrator | 2026-04-05 02:04:16 | INFO  | It takes a moment until task 9a146b45-f58b-413c-8455-9cc9c88e02b3 (resolvconf) has been started and output is visible here. 
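The `+ wait_for_container_healthy 60 <name>` trace above polls each manager container's Docker healthcheck before continuing. A minimal sketch of that helper, reconstructed from the trace: the real script calls `docker inspect -f '{{.State.Health.Status}}'`, but here the probe is factored into a stub function (`probe_health`, an assumption of this sketch) so it runs without Docker, and the retry delay is illustrative.

```shell
#!/bin/sh
# Sketch of the wait_for_container_healthy helper seen in the job trace.
# Polls a container's health status until it reports "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1

    until [ "$(probe_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # illustrative delay; the real interval is not shown in the log
    done
}

# Stand-in for: /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
probe_health() { echo healthy; }

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
# prints "ceph-ansible healthy"
```

Because `docker inspect` exits nonzero for unknown containers, a hardened version would also distinguish "no such container" from "not yet healthy" rather than looping on both.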
2026-04-05 02:04:31.281308 | orchestrator | 2026-04-05 02:04:31.281428 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-05 02:04:31.281445 | orchestrator | 2026-04-05 02:04:31.281456 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 02:04:31.281466 | orchestrator | Sunday 05 April 2026 02:04:20 +0000 (0:00:00.142) 0:00:00.142 ********** 2026-04-05 02:04:31.281475 | orchestrator | ok: [testbed-manager] 2026-04-05 02:04:31.281485 | orchestrator | 2026-04-05 02:04:31.281494 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-05 02:04:31.281504 | orchestrator | Sunday 05 April 2026 02:04:24 +0000 (0:00:03.941) 0:00:04.083 ********** 2026-04-05 02:04:31.281513 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:04:31.281522 | orchestrator | 2026-04-05 02:04:31.281531 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-05 02:04:31.281540 | orchestrator | Sunday 05 April 2026 02:04:24 +0000 (0:00:00.068) 0:00:04.152 ********** 2026-04-05 02:04:31.281549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-05 02:04:31.281559 | orchestrator | 2026-04-05 02:04:31.281568 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-05 02:04:31.281576 | orchestrator | Sunday 05 April 2026 02:04:25 +0000 (0:00:00.094) 0:00:04.246 ********** 2026-04-05 02:04:31.281603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 02:04:31.281613 | orchestrator | 2026-04-05 02:04:31.281623 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-05 02:04:31.281638 | orchestrator | Sunday 05 April 2026 02:04:25 +0000 (0:00:00.092) 0:00:04.339 ********** 2026-04-05 02:04:31.281652 | orchestrator | ok: [testbed-manager] 2026-04-05 02:04:31.281666 | orchestrator | 2026-04-05 02:04:31.281680 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-05 02:04:31.281750 | orchestrator | Sunday 05 April 2026 02:04:26 +0000 (0:00:01.196) 0:00:05.535 ********** 2026-04-05 02:04:31.281761 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:04:31.281769 | orchestrator | 2026-04-05 02:04:31.281778 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-05 02:04:31.281787 | orchestrator | Sunday 05 April 2026 02:04:26 +0000 (0:00:00.061) 0:00:05.596 ********** 2026-04-05 02:04:31.281818 | orchestrator | ok: [testbed-manager] 2026-04-05 02:04:31.281827 | orchestrator | 2026-04-05 02:04:31.281836 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-05 02:04:31.281845 | orchestrator | Sunday 05 April 2026 02:04:26 +0000 (0:00:00.548) 0:00:06.144 ********** 2026-04-05 02:04:31.281856 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:04:31.281866 | orchestrator | 2026-04-05 02:04:31.281876 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-05 02:04:31.281887 | orchestrator | Sunday 05 April 2026 02:04:27 +0000 (0:00:00.084) 0:00:06.229 ********** 2026-04-05 02:04:31.281898 | orchestrator | changed: [testbed-manager] 2026-04-05 02:04:31.281908 | orchestrator | 2026-04-05 02:04:31.281917 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-05 02:04:31.281927 | orchestrator | Sunday 05 April 2026 02:04:27 +0000 (0:00:00.573) 0:00:06.802 ********** 2026-04-05 02:04:31.281937 | orchestrator | changed: 
[testbed-manager] 2026-04-05 02:04:31.281947 | orchestrator | 2026-04-05 02:04:31.281957 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-05 02:04:31.281968 | orchestrator | Sunday 05 April 2026 02:04:28 +0000 (0:00:01.136) 0:00:07.938 ********** 2026-04-05 02:04:31.281979 | orchestrator | ok: [testbed-manager] 2026-04-05 02:04:31.281989 | orchestrator | 2026-04-05 02:04:31.281999 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-05 02:04:31.282010 | orchestrator | Sunday 05 April 2026 02:04:29 +0000 (0:00:00.979) 0:00:08.917 ********** 2026-04-05 02:04:31.282075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-05 02:04:31.282086 | orchestrator | 2026-04-05 02:04:31.282097 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-05 02:04:31.282107 | orchestrator | Sunday 05 April 2026 02:04:29 +0000 (0:00:00.083) 0:00:09.001 ********** 2026-04-05 02:04:31.282117 | orchestrator | changed: [testbed-manager] 2026-04-05 02:04:31.282145 | orchestrator | 2026-04-05 02:04:31.282156 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:04:31.282167 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 02:04:31.282178 | orchestrator | 2026-04-05 02:04:31.282188 | orchestrator | 2026-04-05 02:04:31.282198 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:04:31.282210 | orchestrator | Sunday 05 April 2026 02:04:31 +0000 (0:00:01.223) 0:00:10.224 ********** 2026-04-05 02:04:31.282218 | orchestrator | =============================================================================== 2026-04-05 02:04:31.282227 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.94s 2026-04-05 02:04:31.282235 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2026-04-05 02:04:31.282244 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.20s 2026-04-05 02:04:31.282253 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s 2026-04-05 02:04:31.282261 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2026-04-05 02:04:31.282270 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-04-05 02:04:31.282341 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2026-04-05 02:04:31.282354 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-05 02:04:31.282363 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-04-05 02:04:31.282371 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-05 02:04:31.282380 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-05 02:04:31.282389 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-04-05 02:04:31.282406 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-04-05 02:04:31.598173 | orchestrator | + osism apply sshconfig 2026-04-05 02:04:43.697178 | orchestrator | 2026-04-05 02:04:43 | INFO  | Task b6cab92e-685d-45b4-9dda-bf1515cffee2 (sshconfig) was prepared for execution. 
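The `osism.commons.resolvconf` play above boils down to pointing `/etc/resolv.conf` at systemd-resolved's stub resolver and restarting the service. A sketch of that sequence, run in a scratch directory so it does not touch the real system; the file names stand in for `/run/systemd/resolve/stub-resolv.conf` and `/etc/resolv.conf`:

```shell
#!/bin/sh
# Sketch of the "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf" step performed by the resolvconf role, using
# temporary stand-in paths instead of the real system files.
workdir=$(mktemp -d)
stub="$workdir/stub-resolv.conf"   # stands in for /run/systemd/resolve/stub-resolv.conf
resolv="$workdir/resolv.conf"      # stands in for /etc/resolv.conf

# systemd-resolved serves DNS on the stub listener address 127.0.0.53
printf 'nameserver 127.0.0.53\n' > "$stub"

# Replace any existing resolv.conf with a symlink to the stub file
ln -sf "$stub" "$resolv"

# On a real host the role then restarts the resolver:
#   systemctl restart systemd-resolved
cat "$resolv"
# prints "nameserver 127.0.0.53"
```

The role's "Archive existing file" and "Retrieve file status" tasks guard this step: the link is only forced after the original `/etc/resolv.conf` has been inspected (and, if it was a regular file, backed up).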
2026-04-05 02:04:43.697256 | orchestrator | 2026-04-05 02:04:43 | INFO  | It takes a moment until task b6cab92e-685d-45b4-9dda-bf1515cffee2 (sshconfig) has been started and output is visible here. 2026-04-05 02:04:56.085327 | orchestrator | 2026-04-05 02:04:56.085434 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-05 02:04:56.085451 | orchestrator | 2026-04-05 02:04:56.085509 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-05 02:04:56.085523 | orchestrator | Sunday 05 April 2026 02:04:48 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-04-05 02:04:56.085535 | orchestrator | ok: [testbed-manager] 2026-04-05 02:04:56.085547 | orchestrator | 2026-04-05 02:04:56.085559 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-05 02:04:56.085570 | orchestrator | Sunday 05 April 2026 02:04:48 +0000 (0:00:00.538) 0:00:00.699 ********** 2026-04-05 02:04:56.085581 | orchestrator | changed: [testbed-manager] 2026-04-05 02:04:56.085593 | orchestrator | 2026-04-05 02:04:56.085604 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-05 02:04:56.085615 | orchestrator | Sunday 05 April 2026 02:04:49 +0000 (0:00:00.539) 0:00:01.239 ********** 2026-04-05 02:04:56.085626 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-05 02:04:56.085637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-05 02:04:56.085648 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-05 02:04:56.085659 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-05 02:04:56.085670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-05 02:04:56.085681 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-05 02:04:56.085730 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-05 02:04:56.085742 | orchestrator | 2026-04-05 02:04:56.085753 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-05 02:04:56.085764 | orchestrator | Sunday 05 April 2026 02:04:55 +0000 (0:00:05.951) 0:00:07.190 ********** 2026-04-05 02:04:56.085775 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:04:56.085786 | orchestrator | 2026-04-05 02:04:56.085797 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-05 02:04:56.085808 | orchestrator | Sunday 05 April 2026 02:04:55 +0000 (0:00:00.086) 0:00:07.276 ********** 2026-04-05 02:04:56.085819 | orchestrator | changed: [testbed-manager] 2026-04-05 02:04:56.085829 | orchestrator | 2026-04-05 02:04:56.085840 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:04:56.085852 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:04:56.085864 | orchestrator | 2026-04-05 02:04:56.085877 | orchestrator | 2026-04-05 02:04:56.085891 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:04:56.085904 | orchestrator | Sunday 05 April 2026 02:04:55 +0000 (0:00:00.608) 0:00:07.885 ********** 2026-04-05 02:04:56.085918 | orchestrator | =============================================================================== 2026-04-05 02:04:56.085930 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.95s 2026-04-05 02:04:56.085943 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2026-04-05 02:04:56.085956 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.54s 2026-04-05 02:04:56.085970 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.54s 2026-04-05 02:04:56.086006 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-05 02:04:56.424161 | orchestrator | + osism apply known-hosts 2026-04-05 02:05:08.454398 | orchestrator | 2026-04-05 02:05:08 | INFO  | Task 41a17eb7-db0f-406f-a90a-fd648ce25b3a (known-hosts) was prepared for execution. 2026-04-05 02:05:08.454462 | orchestrator | 2026-04-05 02:05:08 | INFO  | It takes a moment until task 41a17eb7-db0f-406f-a90a-fd648ce25b3a (known-hosts) has been started and output is visible here. 2026-04-05 02:05:25.770905 | orchestrator | 2026-04-05 02:05:25.771017 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-05 02:05:25.771033 | orchestrator | 2026-04-05 02:05:25.771045 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-05 02:05:25.771058 | orchestrator | Sunday 05 April 2026 02:05:12 +0000 (0:00:00.185) 0:00:00.185 ********** 2026-04-05 02:05:25.771070 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-05 02:05:25.771081 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-05 02:05:25.771093 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-05 02:05:25.771104 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-05 02:05:25.771114 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-05 02:05:25.771125 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-05 02:05:25.771136 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-05 02:05:25.771147 | orchestrator | 2026-04-05 02:05:25.771158 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-05 02:05:25.771170 | orchestrator | Sunday 05 April 2026 02:05:18 +0000 (0:00:06.354) 0:00:06.540 ********** 2026-04-05 02:05:25.771182 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-05 02:05:25.771195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-05 02:05:25.771206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-05 02:05:25.771217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-05 02:05:25.771227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-05 02:05:25.771247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-05 02:05:25.771259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-05 02:05:25.771270 | orchestrator | 2026-04-05 02:05:25.771281 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-05 02:05:25.771292 | orchestrator | Sunday 05 April 2026 02:05:18 +0000 (0:00:00.181) 0:00:06.721 ********** 2026-04-05 02:05:25.771304 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIFLov3npE9DgW57m951nv4vtAJd1vT8cTSWLSa3CwPxH)
2026-04-05 02:05:25.771323 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeEyaT2lhs6C+bv0rvhhz9xTXl6/zxL/npMgI07AKMm9lwsNI7/CqCMulahiJPYYxXRHDcpt6Ut9fV09lBQHBbsrB7yAXFjMTCYNB6oDmhqmi4WygIQAe6Dz1PN4HIN3ImsuGkUA9NGgrRcGfkZ2LvK/GAAOcrLslGvsoi5/Xh5UY2vtQoT73AagfR/aImkMjTn1f3/NWTS36C47kzq2Okmm9EuSR+ggNltpdsQMjeYK1N1oYdaR1cApT2mVNr58H0ZRoFyZhpJ71g4wjDPs+2cir9mBuaqlQb5mHgJFRKWTLUSlglCHyN/VJ3eZyU+lyNss225B0Et1bVkEm2X2MCOj2Oo3mdMi66e9WwDt3ymaSm+5zw9x3WhsitdSER/bAAWPrXZNYW4/TfLaBmklgzxXxstxR52jtswpDRNgir4Xb/fvUv4Cq+zCvNDgWrFmDYEhQBwDP2q2Jp7QduxelWC4vR29qoD1Mds1pzcNAey6qnU/7z9wp3AFfuMH2F+Ac=)
2026-04-05 02:05:25.771358 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDcYAOYGvjsef/Uk2972HoqrNeFyd4IEcdD3F1Gaqfz9i5tSAx7KdPvp2jMg+qJmOXrw7VQbkX5XsgSTiEiOymc=)
2026-04-05 02:05:25.771371 | orchestrator |
2026-04-05 02:05:25.771382 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:25.771393 | orchestrator | Sunday 05 April 2026 02:05:20 +0000 (0:00:01.242) 0:00:07.964 **********
2026-04-05 02:05:25.771423 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/ATpPr4mvYf6soZURTgCcybPh1863XIn5aEmYX3wWehGFleojmStCf4jgqAnl0hky1G6p7YDyT9II8KpMZWyWGWS2sbvPAOsN93TRC5TcPVrd2fI0+pP3WYHtEMjpLB1RwzJ3pN5tAilofMYZRLbTnQQ8PjPW4ocDCzUQcBQpIonZl3jF9SfODiUFjwmDw7ftw2AsDuFSgoUJWoc1mRzwi0vz3LnDgCYm48syHR8adbwEk+fAvm1f+POUc8e5fHFTuPs52R2XHSBWkoz2pH3aYC6wNoL2Jadi73eAjyUgNEdezTclRjdqu9uX6Enuu5dDDT3y2vrBQlRYnyEKcA07Nd+UYz2nPHEyiEXsqgnT2z6hFYCXN94slPa3Gkc6Ew5LsYahbwWVIAUxLfcdyvIU4QQt7ID1AFyOWIFOzECEx9ZW4D6Mo63P6KS/T6khE3Xijx+Ac92K2IQmv8uL8r6/qebO3UQ8kRdxvVdau2V8cTVvdfVh5eZbpLkpsqTFxvc=)
2026-04-05 02:05:25.771435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAI8QOYwk6OcQYkjwYYCHbw/b7PS4DIzTkgZZ23bmCT4e1oUwoCzFWEjTqDAJEvsK4GKUf6e9CMhiCc0r0KkpVk=)
2026-04-05 02:05:25.771446 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJQv6eHMk2sVYVMyMeU4V8U5BRYUXa/EwfTTsNzzLBBE)
2026-04-05 02:05:25.771457 | orchestrator |
2026-04-05 02:05:25.771468 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:25.771479 | orchestrator | Sunday 05 April 2026 02:05:21 +0000 (0:00:01.126) 0:00:09.091 **********
2026-04-05 02:05:25.771490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGGb8pSKEs+VkeAEJAMMKgbfj8XvdkKXs4LMHrJYOlwzJBuE7wNk6w9JorYuJrT6ceNCsMsgMv5/VPf92uRxcVw=)
2026-04-05 02:05:25.771502 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKJchDAPFM0jUXgeQQo0bzsm0G1TdQyTRUoBUaorkuMJeW4vnjL+VOnxB0KXBJyRE/GDfj8kCbPvDs/YxegZ4u+awTfUelb+Cq9SDypYJZt0cBwJLtpRTyzgrPqy0BoQ6i3UIepsG++eTO7GAzO81wGrb833jVbSHLr3obnJ7QSYGQth5pz+wUs9xwOA5CZAJWS6O+8lT/bVk4i9LMeeKwXDBx+W30uTlYra6b37F9SnH3lOz3GDNhEVoMhSHULnMcVqiUzuMxQFy1Nw0w4h9KCgPyfRj6uza8qbsffaREX3ymjjcJW60mWQ4dbpB/BBpF0YmUy0KDwqmu8YzQlc39laXznET/xm+MM7I3ADsczJbi5lrivZeQFWIWvnaUJx3qCsBMyOT7jhKn2MRrVWqO3YNZhxipZvdAOiSxobzmLyvweJAdmfG+BOqvT932qOEQyTekMzSFvC0htNrvDrphXqI795zTrPnBgpQYKAByS6K5jRcfazutzm+WCggbl8=)
2026-04-05 02:05:25.771514 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGmMKjjA3whpYm8NUairp/sbFlXroIdLsAd4EDZWO8rU)
2026-04-05 02:05:25.771525 | orchestrator |
2026-04-05 02:05:25.771536 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:25.771547 | orchestrator | Sunday 05 April 2026 02:05:22 +0000 (0:00:01.036) 0:00:10.128 **********
2026-04-05 02:05:25.771558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvbd4UwKvls3uSHxZ9p+lrZJhX5asJVaV9+31PQtlnN2dzgnY3nxeaotoIGAyKpv6h6D+lmRb3Yg2LQyZvzuhw=)
2026-04-05 02:05:25.771569 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDq1+ABpPe+jb/zrJxbApWhMyZNfZlYF9ZEgO2FObeBU)
2026-04-05 02:05:25.771580 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVLn+fTrbZNC920elVvQwc9sKid01jNVc6y/57r2TOi6oRfn69tstd+2XT2eFd/KJG81W20qpaV0WGqgpuBbF2ayjnKXHWEy/FTofPJI4VL30ewukyL+gA3IpB1H2zUbJFAkliwH5FXviedZTPVERyDxiKZLpwX7unPZzLhI0eeISUW0hM8KRcD+mSsuD5XRCn9KXf1BKSpoLbZdKhm3aLJdYwNcbzs9XEM/cKWtxEKRD7XFboJ0vq+RXFANh5BwKXTGSOYhEOYJJA+ZWct1tl+G59FMcuDC6OC9KlXYCbMQz6n2AbsdQ+jqkyBxnEt7M73JSYAouVfthNaDWcY1YkTiEFbCDgeaO4yczSUxyYPCD8JVDOtQKNm5nEXGkis3lvi4lqEReqku+qG29MjESFYXUvPWhbvIyI++AClRBTOpe5UECrZy0wQj0R8Cfk7vMvbm6/1c4je6Pf8GIVAZFtURSRq3ZOgteDfCYKA0hf5ewkKjSBdoBzeAbDo337nac=)
2026-04-05 02:05:25.771610 | orchestrator |
2026-04-05 02:05:25.771630 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:25.771647 | orchestrator | Sunday 05 April 2026 02:05:23 +0000 (0:00:01.098) 0:00:11.227 **********
2026-04-05 02:05:25.771783 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEGwkfbJSJVttOtvYHdwFbkPg755HjLo1Z5YMVP6O+OU97tBdxfC8wr4TEzSnwRUnycFofX5MbJbxJ+w0MmldI=)
2026-04-05 02:05:25.771809 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHvpcXVkwSlYr96/XULHUaHvxWTtaEMnt9vhrYpasMlrZ2989QCCeL2ovpGN75KHZg6CWLRV872o3M5ydpn52/J3iUvzjYJJJtSGNP9Z2Mg++c6uN1ZbStpWnszzf8cDwpRUGrGG4sSRDchoxw8028T0BouHcKGq04BwLNwcI7tuCmjwCXDj5p8bSmgW4RV4e/YmaJc3RWlkILgTb+ulcZlcPC+oHjgDhnFppgnHj2ZUR4yUJOEfHwGks9etcxgPdIOdhdgG1iebAtb5n+Lf3UkOBXsrdiWOqB1wWWEh+KAmIiL7ODWSSUBln+e2OGTdhfdl/oKwHW0N0OQlkt1HbpoIXSqtXNrAwCRbigdiILfklxXcdbd+5o7KgFm0CaTgBf4ZWanFpYHiLqDt6LmmBfHh0b4KPHgPuSfW7ku6+hEEtBJTbdCB1MvpXg6RCDv7vkXG9JhGddD2mHjev8q8PG+E9iLqXnbinCbxrzT4pYzxXEBN0/lVeqxVru6VLSOWs=)
2026-04-05 02:05:25.771829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEpPLf7ofsKwH0ky69/Wn9enc6AJK8ISDgvdc4Am9jzP)
2026-04-05 02:05:25.771848 | orchestrator |
2026-04-05 02:05:25.771865 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:25.771884 | orchestrator | Sunday 05 April 2026 02:05:24 +0000 (0:00:01.146) 0:00:12.373 **********
2026-04-05 02:05:25.771907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZhH2HQ5ZNxNLAzJLyqlz4elogJgX+8iqiq1N7Z6htCzQJIpk275Fr+/v2dLlE5RUP2wc5GeI/QAHBzRE9hyz0=)
2026-04-05 02:05:37.084519 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/k/NMiYwv6oZKbj0r13htrAKdpak6PGOPY/xqZj3we)
2026-04-05 02:05:37.084638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiCbJd5kwjfJ7X1/pxhf4pWY3s8BlmdfzvqPcXbj9dX/iPmekAlsMJfHFJKIdfNYuwl5AoJ393Og5ueq3zMckBz3xyH7C+bU1W9Z/j/e6TM+Hp/f8HuWtPDy2Z3PodkFU/MLgax0PTCEisEWSXs1/FmIAJe8T9D2KME6JN74Vc2AC1FyQsdFBOAzJuy8ml+7zSmTwJ5HYJ+Qdqhhc+rqVcw1Tyol5OIRfyTIgcXBt9D3DbYgF1ftsNwqmKtkGLFZhCTNYFzJVx/oS2w5fczsSvCyBIXH1L5BMo0Q1qQxxBVxn93wspOGtvkEMtroBFFU27tCeEmzgeJ4D1+IkGhadO1S1ysge4b5F2HOXau8GZBAAuVGvZqZs2ZBaSpuNruR+BJ2y1/QUNecTj7JDK1u8xYaffyUE0TIC4SPb5xbgNw6MvGFcXmH2AlsFVpQINB2eijZaZoxokMcN2emuGvcdVYHpzegVr1rrcEwSSItws8UMSHiAdbYCp5I0vzzAwqx8=)
2026-04-05 02:05:37.084658 | orchestrator |
2026-04-05 02:05:37.084672 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:37.084765 | orchestrator | Sunday 05 April 2026 02:05:25 +0000 (0:00:01.143) 0:00:13.516 **********
2026-04-05 02:05:37.084791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkhkUiL7nOuadc987aPCvMKTtnvvvr/y6D/psiLw+1Cvg8TtCn1xigbfcRz4Fb029f6dgFQ+3eY8OJtvRzg3LloAvpwHY/dZiANqnPYwxsKopP1HlogUx1ll+hYEPjlU7OdhI+oFDvlCG2bj9YwfE+UMcueM3HmSWdmjwioEpBkf6LH1sacFs7QlyPZ5Otw1nAGolwM3IM0Y5VUu6gc0h1fH0HWxl0ymEQ2YCBer/8hzhSZJbSPvS9n8qbJ1fDKRReiVRcXoGubr2D9dFmcJ39Qqp7AQDH+68z+rFA0rgT/zWk0exapBNSgVzEPEsWimTdXow28tF85EYY41xUz011K6pIVyAhvFkEpYGmamX6gWS6aRa0ButGY/LVnhqKNvP6PT+T+ildOTRxSXSWixm3BvmCHjNXwGVrRGo0ZzU5jk07JD0mTe5LyNoHvNP6wPDIeUlHR4+zNEpkV6UxRuL+qZiETz+5GzZx+LCrnQre9qxKEc4UMSspxK1G59gAhek=)
2026-04-05 02:05:37.084813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAfzSNoUZ7Lgn/XHTmDdJ0wxvfoX0AzQk7FAyH8inBN+W5yfG5NjVqXDLwFje0OcyZx359iVVXoihRc8IAspFe4=)
2026-04-05 02:05:37.084865 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOBjknLpYOqCk39UAIBk9cRNWgNfdFj6c9Bo5lZFL2Ty)
2026-04-05 02:05:37.084878 | orchestrator |
2026-04-05 02:05:37.084890 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-04-05 02:05:37.084902 | orchestrator | Sunday 05 April 2026 02:05:26 +0000 (0:00:01.133) 0:00:14.650 **********
2026-04-05 02:05:37.084914 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-05 02:05:37.084925 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-05 02:05:37.084936 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-05 02:05:37.084946 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-05 02:05:37.084957 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-05 02:05:37.084968 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-05 02:05:37.084979 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-05 02:05:37.084989 | orchestrator |
2026-04-05 02:05:37.085000 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-04-05 02:05:37.085012 | orchestrator | Sunday 05 April 2026 02:05:32 +0000 (0:00:05.518) 0:00:20.168 **********
2026-04-05 02:05:37.085024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-05 02:05:37.085037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-05 02:05:37.085048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-05 02:05:37.085061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-05 02:05:37.085075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-05 02:05:37.085129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-05 02:05:37.085174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-05 02:05:37.085192 | orchestrator |
2026-04-05 02:05:37.085233 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:37.085281 | orchestrator | Sunday 05 April 2026 02:05:32 +0000 (0:00:00.187) 0:00:20.356 **********
2026-04-05 02:05:37.085301 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLov3npE9DgW57m951nv4vtAJd1vT8cTSWLSa3CwPxH)
2026-04-05 02:05:37.085351 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeEyaT2lhs6C+bv0rvhhz9xTXl6/zxL/npMgI07AKMm9lwsNI7/CqCMulahiJPYYxXRHDcpt6Ut9fV09lBQHBbsrB7yAXFjMTCYNB6oDmhqmi4WygIQAe6Dz1PN4HIN3ImsuGkUA9NGgrRcGfkZ2LvK/GAAOcrLslGvsoi5/Xh5UY2vtQoT73AagfR/aImkMjTn1f3/NWTS36C47kzq2Okmm9EuSR+ggNltpdsQMjeYK1N1oYdaR1cApT2mVNr58H0ZRoFyZhpJ71g4wjDPs+2cir9mBuaqlQb5mHgJFRKWTLUSlglCHyN/VJ3eZyU+lyNss225B0Et1bVkEm2X2MCOj2Oo3mdMi66e9WwDt3ymaSm+5zw9x3WhsitdSER/bAAWPrXZNYW4/TfLaBmklgzxXxstxR52jtswpDRNgir4Xb/fvUv4Cq+zCvNDgWrFmDYEhQBwDP2q2Jp7QduxelWC4vR29qoD1Mds1pzcNAey6qnU/7z9wp3AFfuMH2F+Ac=)
2026-04-05 02:05:37.085373 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDcYAOYGvjsef/Uk2972HoqrNeFyd4IEcdD3F1Gaqfz9i5tSAx7KdPvp2jMg+qJmOXrw7VQbkX5XsgSTiEiOymc=)
2026-04-05 02:05:37.085398 | orchestrator |
2026-04-05 02:05:37.085414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:37.085429 | orchestrator | Sunday 05 April 2026 02:05:33 +0000 (0:00:01.114) 0:00:21.471 **********
2026-04-05 02:05:37.085440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/ATpPr4mvYf6soZURTgCcybPh1863XIn5aEmYX3wWehGFleojmStCf4jgqAnl0hky1G6p7YDyT9II8KpMZWyWGWS2sbvPAOsN93TRC5TcPVrd2fI0+pP3WYHtEMjpLB1RwzJ3pN5tAilofMYZRLbTnQQ8PjPW4ocDCzUQcBQpIonZl3jF9SfODiUFjwmDw7ftw2AsDuFSgoUJWoc1mRzwi0vz3LnDgCYm48syHR8adbwEk+fAvm1f+POUc8e5fHFTuPs52R2XHSBWkoz2pH3aYC6wNoL2Jadi73eAjyUgNEdezTclRjdqu9uX6Enuu5dDDT3y2vrBQlRYnyEKcA07Nd+UYz2nPHEyiEXsqgnT2z6hFYCXN94slPa3Gkc6Ew5LsYahbwWVIAUxLfcdyvIU4QQt7ID1AFyOWIFOzECEx9ZW4D6Mo63P6KS/T6khE3Xijx+Ac92K2IQmv8uL8r6/qebO3UQ8kRdxvVdau2V8cTVvdfVh5eZbpLkpsqTFxvc=)
2026-04-05 02:05:37.085451 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAI8QOYwk6OcQYkjwYYCHbw/b7PS4DIzTkgZZ23bmCT4e1oUwoCzFWEjTqDAJEvsK4GKUf6e9CMhiCc0r0KkpVk=)
2026-04-05 02:05:37.085463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJQv6eHMk2sVYVMyMeU4V8U5BRYUXa/EwfTTsNzzLBBE)
2026-04-05 02:05:37.085474 | orchestrator |
2026-04-05 02:05:37.085485 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:37.085496 | orchestrator | Sunday 05 April 2026 02:05:34 +0000 (0:00:01.109) 0:00:22.580 **********
2026-04-05 02:05:37.085506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGmMKjjA3whpYm8NUairp/sbFlXroIdLsAd4EDZWO8rU)
2026-04-05 02:05:37.085517 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKJchDAPFM0jUXgeQQo0bzsm0G1TdQyTRUoBUaorkuMJeW4vnjL+VOnxB0KXBJyRE/GDfj8kCbPvDs/YxegZ4u+awTfUelb+Cq9SDypYJZt0cBwJLtpRTyzgrPqy0BoQ6i3UIepsG++eTO7GAzO81wGrb833jVbSHLr3obnJ7QSYGQth5pz+wUs9xwOA5CZAJWS6O+8lT/bVk4i9LMeeKwXDBx+W30uTlYra6b37F9SnH3lOz3GDNhEVoMhSHULnMcVqiUzuMxQFy1Nw0w4h9KCgPyfRj6uza8qbsffaREX3ymjjcJW60mWQ4dbpB/BBpF0YmUy0KDwqmu8YzQlc39laXznET/xm+MM7I3ADsczJbi5lrivZeQFWIWvnaUJx3qCsBMyOT7jhKn2MRrVWqO3YNZhxipZvdAOiSxobzmLyvweJAdmfG+BOqvT932qOEQyTekMzSFvC0htNrvDrphXqI795zTrPnBgpQYKAByS6K5jRcfazutzm+WCggbl8=)
2026-04-05 02:05:37.085529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGGb8pSKEs+VkeAEJAMMKgbfj8XvdkKXs4LMHrJYOlwzJBuE7wNk6w9JorYuJrT6ceNCsMsgMv5/VPf92uRxcVw=)
2026-04-05 02:05:37.085540 | orchestrator |
2026-04-05 02:05:37.085550 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:37.085561 | orchestrator | Sunday 05 April 2026 02:05:35 +0000 (0:00:01.122) 0:00:23.703 **********
2026-04-05 02:05:37.085572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKvbd4UwKvls3uSHxZ9p+lrZJhX5asJVaV9+31PQtlnN2dzgnY3nxeaotoIGAyKpv6h6D+lmRb3Yg2LQyZvzuhw=)
2026-04-05 02:05:37.085603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVLn+fTrbZNC920elVvQwc9sKid01jNVc6y/57r2TOi6oRfn69tstd+2XT2eFd/KJG81W20qpaV0WGqgpuBbF2ayjnKXHWEy/FTofPJI4VL30ewukyL+gA3IpB1H2zUbJFAkliwH5FXviedZTPVERyDxiKZLpwX7unPZzLhI0eeISUW0hM8KRcD+mSsuD5XRCn9KXf1BKSpoLbZdKhm3aLJdYwNcbzs9XEM/cKWtxEKRD7XFboJ0vq+RXFANh5BwKXTGSOYhEOYJJA+ZWct1tl+G59FMcuDC6OC9KlXYCbMQz6n2AbsdQ+jqkyBxnEt7M73JSYAouVfthNaDWcY1YkTiEFbCDgeaO4yczSUxyYPCD8JVDOtQKNm5nEXGkis3lvi4lqEReqku+qG29MjESFYXUvPWhbvIyI++AClRBTOpe5UECrZy0wQj0R8Cfk7vMvbm6/1c4je6Pf8GIVAZFtURSRq3ZOgteDfCYKA0hf5ewkKjSBdoBzeAbDo337nac=)
2026-04-05 02:05:41.646158 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDq1+ABpPe+jb/zrJxbApWhMyZNfZlYF9ZEgO2FObeBU)
2026-04-05 02:05:41.646336 | orchestrator |
2026-04-05 02:05:41.646381 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:41.646399 | orchestrator | Sunday 05 April 2026 02:05:37 +0000 (0:00:01.125) 0:00:24.828 **********
2026-04-05 02:05:41.646415 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHvpcXVkwSlYr96/XULHUaHvxWTtaEMnt9vhrYpasMlrZ2989QCCeL2ovpGN75KHZg6CWLRV872o3M5ydpn52/J3iUvzjYJJJtSGNP9Z2Mg++c6uN1ZbStpWnszzf8cDwpRUGrGG4sSRDchoxw8028T0BouHcKGq04BwLNwcI7tuCmjwCXDj5p8bSmgW4RV4e/YmaJc3RWlkILgTb+ulcZlcPC+oHjgDhnFppgnHj2ZUR4yUJOEfHwGks9etcxgPdIOdhdgG1iebAtb5n+Lf3UkOBXsrdiWOqB1wWWEh+KAmIiL7ODWSSUBln+e2OGTdhfdl/oKwHW0N0OQlkt1HbpoIXSqtXNrAwCRbigdiILfklxXcdbd+5o7KgFm0CaTgBf4ZWanFpYHiLqDt6LmmBfHh0b4KPHgPuSfW7ku6+hEEtBJTbdCB1MvpXg6RCDv7vkXG9JhGddD2mHjev8q8PG+E9iLqXnbinCbxrzT4pYzxXEBN0/lVeqxVru6VLSOWs=)
2026-04-05 02:05:41.646430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEGwkfbJSJVttOtvYHdwFbkPg755HjLo1Z5YMVP6O+OU97tBdxfC8wr4TEzSnwRUnycFofX5MbJbxJ+w0MmldI=)
2026-04-05 02:05:41.646443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEpPLf7ofsKwH0ky69/Wn9enc6AJK8ISDgvdc4Am9jzP)
2026-04-05 02:05:41.646454 | orchestrator |
2026-04-05 02:05:41.646466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:41.646477 | orchestrator | Sunday 05 April 2026 02:05:38 +0000 (0:00:01.110) 0:00:25.938 **********
2026-04-05 02:05:41.646501 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiCbJd5kwjfJ7X1/pxhf4pWY3s8BlmdfzvqPcXbj9dX/iPmekAlsMJfHFJKIdfNYuwl5AoJ393Og5ueq3zMckBz3xyH7C+bU1W9Z/j/e6TM+Hp/f8HuWtPDy2Z3PodkFU/MLgax0PTCEisEWSXs1/FmIAJe8T9D2KME6JN74Vc2AC1FyQsdFBOAzJuy8ml+7zSmTwJ5HYJ+Qdqhhc+rqVcw1Tyol5OIRfyTIgcXBt9D3DbYgF1ftsNwqmKtkGLFZhCTNYFzJVx/oS2w5fczsSvCyBIXH1L5BMo0Q1qQxxBVxn93wspOGtvkEMtroBFFU27tCeEmzgeJ4D1+IkGhadO1S1ysge4b5F2HOXau8GZBAAuVGvZqZs2ZBaSpuNruR+BJ2y1/QUNecTj7JDK1u8xYaffyUE0TIC4SPb5xbgNw6MvGFcXmH2AlsFVpQINB2eijZaZoxokMcN2emuGvcdVYHpzegVr1rrcEwSSItws8UMSHiAdbYCp5I0vzzAwqx8=)
2026-04-05 02:05:41.646512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCZhH2HQ5ZNxNLAzJLyqlz4elogJgX+8iqiq1N7Z6htCzQJIpk275Fr+/v2dLlE5RUP2wc5GeI/QAHBzRE9hyz0=)
2026-04-05 02:05:41.646524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/k/NMiYwv6oZKbj0r13htrAKdpak6PGOPY/xqZj3we)
2026-04-05 02:05:41.646535 | orchestrator |
2026-04-05 02:05:41.646547 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-05 02:05:41.646560 | orchestrator | Sunday 05 April 2026 02:05:39 +0000 (0:00:01.090) 0:00:27.029 **********
2026-04-05 02:05:41.646574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAfzSNoUZ7Lgn/XHTmDdJ0wxvfoX0AzQk7FAyH8inBN+W5yfG5NjVqXDLwFje0OcyZx359iVVXoihRc8IAspFe4=)
2026-04-05 02:05:41.646605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkhkUiL7nOuadc987aPCvMKTtnvvvr/y6D/psiLw+1Cvg8TtCn1xigbfcRz4Fb029f6dgFQ+3eY8OJtvRzg3LloAvpwHY/dZiANqnPYwxsKopP1HlogUx1ll+hYEPjlU7OdhI+oFDvlCG2bj9YwfE+UMcueM3HmSWdmjwioEpBkf6LH1sacFs7QlyPZ5Otw1nAGolwM3IM0Y5VUu6gc0h1fH0HWxl0ymEQ2YCBer/8hzhSZJbSPvS9n8qbJ1fDKRReiVRcXoGubr2D9dFmcJ39Qqp7AQDH+68z+rFA0rgT/zWk0exapBNSgVzEPEsWimTdXow28tF85EYY41xUz011K6pIVyAhvFkEpYGmamX6gWS6aRa0ButGY/LVnhqKNvP6PT+T+ildOTRxSXSWixm3BvmCHjNXwGVrRGo0ZzU5jk07JD0mTe5LyNoHvNP6wPDIeUlHR4+zNEpkV6UxRuL+qZiETz+5GzZx+LCrnQre9qxKEc4UMSspxK1G59gAhek=)
2026-04-05 02:05:41.646620 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOBjknLpYOqCk39UAIBk9cRNWgNfdFj6c9Bo5lZFL2Ty)
2026-04-05 02:05:41.646633 | orchestrator |
2026-04-05 02:05:41.646646 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-04-05 02:05:41.646668 | orchestrator | Sunday 05 April 2026 02:05:40 +0000 (0:00:01.108) 0:00:28.138 **********
2026-04-05 02:05:41.646681 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-05 02:05:41.646719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-05 02:05:41.646730 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-05 02:05:41.646740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-05 02:05:41.646749 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 02:05:41.646779 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-05 02:05:41.646790 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-05 02:05:41.646800 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:05:41.646809 | orchestrator |
2026-04-05 02:05:41.646819 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-04-05 02:05:41.646829 | orchestrator | Sunday 05 April 2026 02:05:40 +0000 (0:00:00.165) 0:00:28.303 **********
2026-04-05 02:05:41.646838 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:05:41.646848 | orchestrator |
2026-04-05 02:05:41.646857 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-04-05 02:05:41.646872 | orchestrator | Sunday 05 April 2026 02:05:40 +0000 (0:00:00.073) 0:00:28.377 **********
2026-04-05 02:05:41.646882 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:05:41.646891 | orchestrator |
2026-04-05 02:05:41.646901 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-04-05 02:05:41.646910 | orchestrator | Sunday 05 April 2026 02:05:40 +0000 (0:00:00.064) 0:00:28.441 **********
2026-04-05 02:05:41.646920 | orchestrator | changed: [testbed-manager]
2026-04-05 02:05:41.646935 | orchestrator |
2026-04-05 02:05:41.646952 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:05:41.646967 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 02:05:41.646985 | orchestrator |
2026-04-05 02:05:41.647000 | orchestrator |
2026-04-05 02:05:41.647016 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:05:41.647031 | orchestrator | Sunday 05 April 2026 02:05:41 +0000 (0:00:00.741) 0:00:29.183 **********
2026-04-05 02:05:41.647047 | orchestrator | ===============================================================================
2026-04-05 02:05:41.647061 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.35s
2026-04-05 02:05:41.647076 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.52s
2026-04-05 02:05:41.647095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s
2026-04-05 02:05:41.647113 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s
2026-04-05 02:05:41.647129 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2026-04-05 02:05:41.647145 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-04-05 02:05:41.647161 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-04-05 02:05:41.647177 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2026-04-05 02:05:41.647194 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2026-04-05 02:05:41.647209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-04-05 02:05:41.647226 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-04-05 02:05:41.647242 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-04-05 02:05:41.647259 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2026-04-05 02:05:41.647274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-04-05 02:05:41.647301 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-04-05 02:05:41.647311 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-04-05 02:05:41.647320 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s
2026-04-05 02:05:41.647330 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s
2026-04-05 02:05:41.647340 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2026-04-05 02:05:41.647350 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2026-04-05 02:05:41.948946 | orchestrator | + osism apply squid
2026-04-05 02:05:54.105564 | orchestrator | 2026-04-05 02:05:54 | INFO  | Task c0c017c5-d395-47e8-b2f4-3fd542a17605 (squid) was prepared for execution.
2026-04-05 02:05:54.105669 | orchestrator | 2026-04-05 02:05:54 | INFO  | It takes a moment until task c0c017c5-d395-47e8-b2f4-3fd542a17605 (squid) has been started and output is visible here.
2026-04-05 02:07:49.821466 | orchestrator |
2026-04-05 02:07:49.821579 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-04-05 02:07:49.821593 | orchestrator |
2026-04-05 02:07:49.821603 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-04-05 02:07:49.821612 | orchestrator | Sunday 05 April 2026 02:05:58 +0000 (0:00:00.170) 0:00:00.170 **********
2026-04-05 02:07:49.821621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 02:07:49.821632 | orchestrator |
2026-04-05 02:07:49.821641 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-04-05 02:07:49.821650 | orchestrator | Sunday 05 April 2026 02:05:58 +0000 (0:00:00.100) 0:00:00.270 **********
2026-04-05 02:07:49.821659 | orchestrator | ok: [testbed-manager]
2026-04-05 02:07:49.821669 | orchestrator |
2026-04-05 02:07:49.821678 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-04-05 02:07:49.821686 | orchestrator | Sunday 05 April 2026 02:06:00 +0000 (0:00:01.572) 0:00:01.843 **********
2026-04-05 02:07:49.821774 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-04-05 02:07:49.821784 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-04-05 02:07:49.821792 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-04-05 02:07:49.821801 | orchestrator |
2026-04-05 02:07:49.821809 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-04-05 02:07:49.821818 | orchestrator | Sunday 05 April 2026 02:06:01 +0000 (0:00:01.203) 0:00:03.046 **********
2026-04-05 02:07:49.821827 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-04-05 02:07:49.821836 | orchestrator |
2026-04-05 02:07:49.821844 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-04-05 02:07:49.821853 | orchestrator | Sunday 05 April 2026 02:06:02 +0000 (0:00:01.179) 0:00:04.226 **********
2026-04-05 02:07:49.821861 | orchestrator | ok: [testbed-manager]
2026-04-05 02:07:49.821870 | orchestrator |
2026-04-05 02:07:49.821878 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-04-05 02:07:49.821887 | orchestrator | Sunday 05 April 2026 02:06:02 +0000 (0:00:00.368) 0:00:04.594 **********
2026-04-05 02:07:49.821897 | orchestrator | changed: [testbed-manager]
2026-04-05 02:07:49.821906 | orchestrator |
2026-04-05 02:07:49.821915 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-04-05 02:07:49.821923 | orchestrator | Sunday 05 April 2026 02:06:03 +0000 (0:00:00.983) 0:00:05.578 **********
2026-04-05 02:07:49.821932 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-04-05 02:07:49.821947 | orchestrator | ok: [testbed-manager]
2026-04-05 02:07:49.821956 | orchestrator |
2026-04-05 02:07:49.821965 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-04-05 02:07:49.822000 | orchestrator | Sunday 05 April 2026 02:06:36 +0000 (0:00:32.751) 0:00:38.329 **********
2026-04-05 02:07:49.822009 | orchestrator | changed: [testbed-manager]
2026-04-05 02:07:49.822071 | orchestrator |
2026-04-05 02:07:49.822082 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-04-05 02:07:49.822091 | orchestrator | Sunday 05 April 2026 02:06:48 +0000 (0:00:12.093) 0:00:50.423 **********
2026-04-05 02:07:49.822100 | orchestrator | Pausing for 60 seconds
2026-04-05 02:07:49.822107 | orchestrator | changed: [testbed-manager]
2026-04-05 02:07:49.822116 | orchestrator |
2026-04-05 02:07:49.822125 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-04-05 02:07:49.822134 | orchestrator | Sunday 05 April 2026 02:07:48 +0000 (0:01:00.118) 0:01:50.542 **********
2026-04-05 02:07:49.822142 | orchestrator | ok: [testbed-manager]
2026-04-05 02:07:49.822151 | orchestrator |
2026-04-05 02:07:49.822161 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-04-05 02:07:49.822170 | orchestrator | Sunday 05 April 2026 02:07:48 +0000 (0:00:00.066) 0:01:50.609 **********
2026-04-05 02:07:49.822178 | orchestrator | changed: [testbed-manager]
2026-04-05 02:07:49.822186 | orchestrator |
2026-04-05 02:07:49.822195 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:07:49.822204 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:07:49.822212 | orchestrator |
2026-04-05 02:07:49.822220 | orchestrator |
2026-04-05 02:07:49.822228 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:07:49.822236 | orchestrator | Sunday 05 April 2026 02:07:49 +0000 (0:00:00.653) 0:01:51.263 **********
2026-04-05 02:07:49.822245 | orchestrator | ===============================================================================
2026-04-05 02:07:49.822254 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.12s
2026-04-05 02:07:49.822263 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.75s
2026-04-05 02:07:49.822272 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.09s
2026-04-05 02:07:49.822297 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.57s
2026-04-05 02:07:49.822307 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.20s
2026-04-05 02:07:49.822316 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.18s
2026-04-05 02:07:49.822325 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s
2026-04-05 02:07:49.822333 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2026-04-05 02:07:49.822342 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-04-05 02:07:49.822350 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-04-05 02:07:49.822359 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-04-05 02:07:50.163926 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-04-05 02:07:50.164020 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-05 02:07:50.226368 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 02:07:50.226466 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-05 02:07:50.234675 | orchestrator | + set -e
2026-04-05 02:07:50.234808 | orchestrator | + NAMESPACE=kolla/release
2026-04-05 02:07:50.234824 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-05 02:07:50.240094 | orchestrator | ++ semver 9.5.0 9.0.0
2026-04-05 02:07:50.323319 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-05 02:07:50.325049 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-05 02:08:02.432616 | orchestrator | 2026-04-05 02:08:02 | INFO  | Task 68238f89-7ddc-422e-b1c2-df00eefad8fd (operator) was prepared for execution.
2026-04-05 02:08:02.432754 | orchestrator | 2026-04-05 02:08:02 | INFO  | It takes a moment until task 68238f89-7ddc-422e-b1c2-df00eefad8fd (operator) has been started and output is visible here.
2026-04-05 02:08:18.656287 | orchestrator |
2026-04-05 02:08:18.656365 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-05 02:08:18.656372 | orchestrator |
2026-04-05 02:08:18.656377 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 02:08:18.656382 | orchestrator | Sunday 05 April 2026 02:08:06 +0000 (0:00:00.155) 0:00:00.155 **********
2026-04-05 02:08:18.656386 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:08:18.656391 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:08:18.656395 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:08:18.656399 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:08:18.656403 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:08:18.656406 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:08:18.656410 | orchestrator |
2026-04-05 02:08:18.656414 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-05 02:08:18.656418 | orchestrator | Sunday 05 April 2026 02:08:09 +0000 (0:00:03.367) 0:00:03.523 **********
2026-04-05 02:08:18.656422 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:08:18.656426 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:08:18.656429 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:08:18.656445 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:08:18.656449 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:08:18.656453 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:08:18.656457 | orchestrator |
2026-04-05 02:08:18.656461 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-05 02:08:18.656464 | orchestrator |
2026-04-05 02:08:18.656468 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-05 02:08:18.656472 | orchestrator | Sunday 05 April 2026 02:08:10 +0000 (0:00:00.777) 0:00:04.301 **********
2026-04-05 02:08:18.656476 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:08:18.656479 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:08:18.656483 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:08:18.656487 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:08:18.656491 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:08:18.656495 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:08:18.656499 | orchestrator |
2026-04-05 02:08:18.656503 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-05 02:08:18.656507 | orchestrator | Sunday 05 April 2026 02:08:10 +0000 (0:00:00.160) 0:00:04.462 **********
2026-04-05 02:08:18.656510 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:08:18.656514 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:08:18.656518 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:08:18.656522 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:08:18.656525 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:08:18.656529 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:08:18.656533 | orchestrator |
2026-04-05 02:08:18.656537 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-05 02:08:18.656540 | orchestrator | Sunday 05 April 2026 02:08:11 +0000 (0:00:00.200) 0:00:04.662 **********
2026-04-05 02:08:18.656544 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:08:18.656549 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:08:18.656553 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:08:18.656556 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:08:18.656560 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:08:18.656564 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:08:18.656568 | orchestrator |
2026-04-05 02:08:18.656571 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-05 02:08:18.656575 | orchestrator | Sunday 05 April 2026 02:08:11 +0000 (0:00:00.682) 0:00:05.344 **********
2026-04-05 02:08:18.656579 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:08:18.656583 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:08:18.656586 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:08:18.656590 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:08:18.656594 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:08:18.656598 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:08:18.656616 | orchestrator |
2026-04-05 02:08:18.656620 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-05 02:08:18.656624 | orchestrator | Sunday 05 April 2026 02:08:12 +0000 (0:00:00.810) 0:00:06.154 **********
2026-04-05 02:08:18.656628 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-05 02:08:18.656632 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-05 02:08:18.656636 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-05 02:08:18.656639 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-05 02:08:18.656643 |
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-05 02:08:18.656647 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-04-05 02:08:18.656650 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-04-05 02:08:18.656654 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-04-05 02:08:18.656658 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-04-05 02:08:18.656662 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-04-05 02:08:18.656665 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-04-05 02:08:18.656669 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-04-05 02:08:18.656673 | orchestrator | 2026-04-05 02:08:18.656677 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-05 02:08:18.656680 | orchestrator | Sunday 05 April 2026 02:08:13 +0000 (0:00:01.288) 0:00:07.443 ********** 2026-04-05 02:08:18.656684 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:08:18.656688 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:08:18.656726 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:08:18.656730 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:08:18.656733 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:08:18.656737 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:08:18.656741 | orchestrator | 2026-04-05 02:08:18.656745 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-05 02:08:18.656750 | orchestrator | Sunday 05 April 2026 02:08:15 +0000 (0:00:01.267) 0:00:08.711 ********** 2026-04-05 02:08:18.656798 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-04-05 02:08:18.656803 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-04-05 02:08:18.656807 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-04-05 02:08:18.656811 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656826 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656830 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656834 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656838 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656842 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-04-05 02:08:18.656846 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656849 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656853 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656857 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656862 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656867 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-04-05 02:08:18.656871 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656876 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656880 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656885 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656889 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656899 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-04-05 02:08:18.656903 | 
orchestrator | 2026-04-05 02:08:18.656908 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-05 02:08:18.656913 | orchestrator | Sunday 05 April 2026 02:08:16 +0000 (0:00:01.339) 0:00:10.051 ********** 2026-04-05 02:08:18.656918 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:18.656922 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:18.656927 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:18.656931 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:18.656936 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:18.656940 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:18.656944 | orchestrator | 2026-04-05 02:08:18.656949 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-05 02:08:18.656953 | orchestrator | Sunday 05 April 2026 02:08:16 +0000 (0:00:00.186) 0:00:10.238 ********** 2026-04-05 02:08:18.656958 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:18.656962 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:18.656967 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:18.656971 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:18.656976 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:18.656980 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:18.656985 | orchestrator | 2026-04-05 02:08:18.656990 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-05 02:08:18.656994 | orchestrator | Sunday 05 April 2026 02:08:16 +0000 (0:00:00.202) 0:00:10.441 ********** 2026-04-05 02:08:18.656999 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:08:18.657003 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:08:18.657007 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:08:18.657012 | orchestrator | changed: [testbed-node-2] 2026-04-05 
02:08:18.657016 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:08:18.657021 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:08:18.657025 | orchestrator | 2026-04-05 02:08:18.657029 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-05 02:08:18.657034 | orchestrator | Sunday 05 April 2026 02:08:17 +0000 (0:00:00.562) 0:00:11.003 ********** 2026-04-05 02:08:18.657039 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:18.657043 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:18.657048 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:18.657052 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:18.657056 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:18.657061 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:18.657065 | orchestrator | 2026-04-05 02:08:18.657070 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-05 02:08:18.657074 | orchestrator | Sunday 05 April 2026 02:08:17 +0000 (0:00:00.176) 0:00:11.179 ********** 2026-04-05 02:08:18.657079 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 02:08:18.657089 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:08:18.657094 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-05 02:08:18.657098 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 02:08:18.657103 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 02:08:18.657107 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:08:18.657111 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:08:18.657116 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:08:18.657120 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 02:08:18.657125 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-05 02:08:18.657129 | orchestrator | changed: [testbed-node-5] 2026-04-05 
02:08:18.657134 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:08:18.657138 | orchestrator | 2026-04-05 02:08:18.657142 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-05 02:08:18.657147 | orchestrator | Sunday 05 April 2026 02:08:18 +0000 (0:00:00.690) 0:00:11.869 ********** 2026-04-05 02:08:18.657155 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:18.657160 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:18.657164 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:18.657169 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:18.657173 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:18.657177 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:18.657182 | orchestrator | 2026-04-05 02:08:18.657186 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-05 02:08:18.657191 | orchestrator | Sunday 05 April 2026 02:08:18 +0000 (0:00:00.148) 0:00:12.018 ********** 2026-04-05 02:08:18.657195 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:18.657200 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:18.657204 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:18.657209 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:18.657216 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:20.059450 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:20.059566 | orchestrator | 2026-04-05 02:08:20.059584 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-05 02:08:20.059598 | orchestrator | Sunday 05 April 2026 02:08:18 +0000 (0:00:00.166) 0:00:12.185 ********** 2026-04-05 02:08:20.059609 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:20.059620 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:20.059631 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
02:08:20.059642 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:20.059653 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:20.059664 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:20.059674 | orchestrator | 2026-04-05 02:08:20.059686 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-05 02:08:20.059766 | orchestrator | Sunday 05 April 2026 02:08:18 +0000 (0:00:00.155) 0:00:12.340 ********** 2026-04-05 02:08:20.059778 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:08:20.059789 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:08:20.059819 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:08:20.059831 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:08:20.059842 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:08:20.059853 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:08:20.059864 | orchestrator | 2026-04-05 02:08:20.059875 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-05 02:08:20.059886 | orchestrator | Sunday 05 April 2026 02:08:19 +0000 (0:00:00.722) 0:00:13.063 ********** 2026-04-05 02:08:20.059897 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:08:20.059908 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:08:20.059920 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:08:20.059931 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:08:20.059942 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:08:20.059953 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:08:20.059964 | orchestrator | 2026-04-05 02:08:20.059975 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:08:20.059988 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060000 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060011 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060022 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060033 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060067 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 02:08:20.060078 | orchestrator | 2026-04-05 02:08:20.060089 | orchestrator | 2026-04-05 02:08:20.060100 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:08:20.060111 | orchestrator | Sunday 05 April 2026 02:08:19 +0000 (0:00:00.262) 0:00:13.325 ********** 2026-04-05 02:08:20.060122 | orchestrator | =============================================================================== 2026-04-05 02:08:20.060133 | orchestrator | Gathering Facts --------------------------------------------------------- 3.37s 2026-04-05 02:08:20.060144 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s 2026-04-05 02:08:20.060156 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s 2026-04-05 02:08:20.060167 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2026-04-05 02:08:20.060178 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s 2026-04-05 02:08:20.060189 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s 2026-04-05 02:08:20.060200 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s 2026-04-05 02:08:20.060211 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.69s 2026-04-05 02:08:20.060222 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.68s 2026-04-05 02:08:20.060233 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2026-04-05 02:08:20.060244 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2026-04-05 02:08:20.060255 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-04-05 02:08:20.060265 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2026-04-05 02:08:20.060276 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.19s 2026-04-05 02:08:20.060287 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-04-05 02:08:20.060298 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2026-04-05 02:08:20.060309 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-04-05 02:08:20.060320 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-04-05 02:08:20.060331 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-04-05 02:08:20.412538 | orchestrator | + osism apply --environment custom facts 2026-04-05 02:08:22.429897 | orchestrator | 2026-04-05 02:08:22 | INFO  | Trying to run play facts in environment custom 2026-04-05 02:08:32.616578 | orchestrator | 2026-04-05 02:08:32 | INFO  | Task 54c5f0d5-40a8-46fb-bfdc-32163ad38df5 (facts) was prepared for execution. 2026-04-05 02:08:32.616677 | orchestrator | 2026-04-05 02:08:32 | INFO  | It takes a moment until task 54c5f0d5-40a8-46fb-bfdc-32163ad38df5 (facts) has been started and output is visible here. 
2026-04-05 02:09:17.463684 | orchestrator |
2026-04-05 02:09:17.463932 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-05 02:09:17.463955 | orchestrator |
2026-04-05 02:09:17.463968 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 02:09:17.463981 | orchestrator | Sunday 05 April 2026 02:08:36 +0000 (0:00:00.085) 0:00:00.085 **********
2026-04-05 02:09:17.463993 | orchestrator | ok: [testbed-manager]
2026-04-05 02:09:17.464006 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:09:17.464018 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.464029 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.464040 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.464051 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:09:17.464097 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:09:17.464110 | orchestrator |
2026-04-05 02:09:17.464121 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-05 02:09:17.464132 | orchestrator | Sunday 05 April 2026 02:08:38 +0000 (0:00:01.435) 0:00:01.521 **********
2026-04-05 02:09:17.464143 | orchestrator | ok: [testbed-manager]
2026-04-05 02:09:17.464157 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.464170 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:09:17.464184 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.464197 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:09:17.464210 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.464223 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:09:17.464236 | orchestrator |
2026-04-05 02:09:17.464249 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-05 02:09:17.464262 | orchestrator |
2026-04-05 02:09:17.464274 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-05 02:09:17.464287 | orchestrator | Sunday 05 April 2026 02:08:39 +0000 (0:00:01.182) 0:00:02.703 **********
2026-04-05 02:09:17.464300 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.464312 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.464326 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.464338 | orchestrator |
2026-04-05 02:09:17.464351 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-05 02:09:17.464366 | orchestrator | Sunday 05 April 2026 02:08:39 +0000 (0:00:00.119) 0:00:02.823 **********
2026-04-05 02:09:17.464378 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.464390 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.464403 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.464416 | orchestrator |
2026-04-05 02:09:17.464429 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-05 02:09:17.464441 | orchestrator | Sunday 05 April 2026 02:08:39 +0000 (0:00:00.218) 0:00:03.041 **********
2026-04-05 02:09:17.464454 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.464467 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.464480 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.464492 | orchestrator |
2026-04-05 02:09:17.464506 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-05 02:09:17.464519 | orchestrator | Sunday 05 April 2026 02:08:40 +0000 (0:00:00.235) 0:00:03.277 **********
2026-04-05 02:09:17.464535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:09:17.464547 | orchestrator |
2026-04-05 02:09:17.464559 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-05 02:09:17.464570 | orchestrator | Sunday 05 April 2026 02:08:40 +0000 (0:00:00.168) 0:00:03.445 **********
2026-04-05 02:09:17.464581 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.464591 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.464602 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.464613 | orchestrator |
2026-04-05 02:09:17.464624 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-05 02:09:17.464635 | orchestrator | Sunday 05 April 2026 02:08:40 +0000 (0:00:00.474) 0:00:03.920 **********
2026-04-05 02:09:17.464646 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:09:17.464657 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:09:17.464668 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:09:17.464679 | orchestrator |
2026-04-05 02:09:17.464712 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-05 02:09:17.464726 | orchestrator | Sunday 05 April 2026 02:08:40 +0000 (0:00:00.151) 0:00:04.071 **********
2026-04-05 02:09:17.464737 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.464748 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.464759 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.464770 | orchestrator |
2026-04-05 02:09:17.464781 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-05 02:09:17.464801 | orchestrator | Sunday 05 April 2026 02:08:41 +0000 (0:00:01.068) 0:00:05.140 **********
2026-04-05 02:09:17.464813 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.464824 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.464835 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.464846 | orchestrator |
2026-04-05 02:09:17.464857 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-05 02:09:17.464868 | orchestrator | Sunday 05 April 2026 02:08:42 +0000 (0:00:00.544) 0:00:05.685 **********
2026-04-05 02:09:17.464879 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.464890 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.464901 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.464912 | orchestrator |
2026-04-05 02:09:17.464923 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-05 02:09:17.464994 | orchestrator | Sunday 05 April 2026 02:08:43 +0000 (0:00:01.084) 0:00:06.769 **********
2026-04-05 02:09:17.465007 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.465018 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.465029 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.465039 | orchestrator |
2026-04-05 02:09:17.465050 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-05 02:09:17.465061 | orchestrator | Sunday 05 April 2026 02:08:59 +0000 (0:00:16.001) 0:00:22.771 **********
2026-04-05 02:09:17.465072 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:09:17.465083 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:09:17.465094 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:09:17.465104 | orchestrator |
2026-04-05 02:09:17.465115 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-05 02:09:17.465148 | orchestrator | Sunday 05 April 2026 02:08:59 +0000 (0:00:00.104) 0:00:22.876 **********
2026-04-05 02:09:17.465159 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:09:17.465170 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:09:17.465181 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:09:17.465192 | orchestrator |
2026-04-05 02:09:17.465208 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-05 02:09:17.465220 | orchestrator | Sunday 05 April 2026 02:09:07 +0000 (0:00:07.624) 0:00:30.500 **********
2026-04-05 02:09:17.465230 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.465241 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.465252 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.465263 | orchestrator |
2026-04-05 02:09:17.465274 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-05 02:09:17.465285 | orchestrator | Sunday 05 April 2026 02:09:07 +0000 (0:00:00.460) 0:00:30.961 **********
2026-04-05 02:09:17.465296 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-05 02:09:17.465307 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-05 02:09:17.465318 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-05 02:09:17.465329 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-05 02:09:17.465340 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-05 02:09:17.465350 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-05 02:09:17.465361 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-05 02:09:17.465372 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-05 02:09:17.465383 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-05 02:09:17.465394 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-05 02:09:17.465404 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-05 02:09:17.465415 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-05 02:09:17.465426 | orchestrator |
2026-04-05 02:09:17.465437 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-05 02:09:17.465456 | orchestrator | Sunday 05 April 2026 02:09:11 +0000 (0:00:03.673) 0:00:34.635 **********
2026-04-05 02:09:17.465467 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.465478 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.465489 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.465500 | orchestrator |
2026-04-05 02:09:17.465511 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 02:09:17.465522 | orchestrator |
2026-04-05 02:09:17.465533 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 02:09:17.465544 | orchestrator | Sunday 05 April 2026 02:09:12 +0000 (0:00:01.329) 0:00:35.965 **********
2026-04-05 02:09:17.465555 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:09:17.465565 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:09:17.465576 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:09:17.465587 | orchestrator | ok: [testbed-manager]
2026-04-05 02:09:17.465598 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:09:17.465609 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:09:17.465620 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:09:17.465630 | orchestrator |
2026-04-05 02:09:17.465641 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:09:17.465653 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:09:17.465665 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:09:17.465678 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:09:17.465689 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:09:17.465724 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:09:17.465735 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:09:17.465746 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:09:17.465757 | orchestrator |
2026-04-05 02:09:17.465768 | orchestrator |
2026-04-05 02:09:17.465780 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:09:17.465800 | orchestrator | Sunday 05 April 2026 02:09:17 +0000 (0:00:04.725) 0:00:40.690 **********
2026-04-05 02:09:17.465832 | orchestrator | ===============================================================================
2026-04-05 02:09:17.465851 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.00s
2026-04-05 02:09:17.465869 | orchestrator | Install required packages (Debian) -------------------------------------- 7.62s
2026-04-05 02:09:17.465887 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.73s
2026-04-05 02:09:17.465903 | orchestrator | Copy fact files --------------------------------------------------------- 3.67s
2026-04-05 02:09:17.465919 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2026-04-05 02:09:17.465937 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.33s
2026-04-05 02:09:17.465967 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-04-05 02:09:17.705981 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-04-05 02:09:17.706219 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-04-05 02:09:17.706262 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.54s
2026-04-05 02:09:17.706301 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-04-05 02:09:17.706313 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-04-05 02:09:17.706324 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-04-05 02:09:17.706335 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2026-04-05 02:09:17.706347 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2026-04-05 02:09:17.706359 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-04-05 02:09:17.706370 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-04-05 02:09:17.706381 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-05 02:09:18.025841 | orchestrator | + osism apply bootstrap
2026-04-05 02:09:30.176522 | orchestrator | 2026-04-05 02:09:30 | INFO  | Task 98c1fd72-d8f2-42c5-a503-8851ad6404ab (bootstrap) was prepared for execution.
2026-04-05 02:09:30.176655 | orchestrator | 2026-04-05 02:09:30 | INFO  | It takes a moment until task 98c1fd72-d8f2-42c5-a503-8851ad6404ab (bootstrap) has been started and output is visible here.
2026-04-05 02:09:47.628787 | orchestrator | 2026-04-05 02:09:47.628935 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-04-05 02:09:47.628953 | orchestrator | 2026-04-05 02:09:47.628966 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-04-05 02:09:47.628978 | orchestrator | Sunday 05 April 2026 02:09:34 +0000 (0:00:00.178) 0:00:00.178 ********** 2026-04-05 02:09:47.628990 | orchestrator | ok: [testbed-manager] 2026-04-05 02:09:47.629003 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:09:47.629014 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:09:47.629025 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:09:47.629036 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:09:47.629047 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:09:47.629058 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:09:47.629069 | orchestrator | 2026-04-05 02:09:47.629081 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 02:09:47.629092 | orchestrator | 2026-04-05 02:09:47.629103 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 02:09:47.629115 | orchestrator | Sunday 05 April 2026 02:09:35 +0000 (0:00:00.288) 0:00:00.467 ********** 2026-04-05 02:09:47.629126 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:09:47.629137 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:09:47.629147 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:09:47.629158 | orchestrator | ok: [testbed-manager] 2026-04-05 02:09:47.629169 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:09:47.629180 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:09:47.629191 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:09:47.629202 | orchestrator | 2026-04-05 02:09:47.629213 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-04-05 02:09:47.629224 | orchestrator | 2026-04-05 02:09:47.629234 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 02:09:47.629246 | orchestrator | Sunday 05 April 2026 02:09:38 +0000 (0:00:03.739) 0:00:04.206 ********** 2026-04-05 02:09:47.629258 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-05 02:09:47.629272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-05 02:09:47.629286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-04-05 02:09:47.629300 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-05 02:09:47.629314 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 02:09:47.629327 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-05 02:09:47.629340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 02:09:47.629353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 02:09:47.629367 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 02:09:47.629408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-04-05 02:09:47.629421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 02:09:47.629436 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-05 02:09:47.629449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-04-05 02:09:47.629461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 02:09:47.629474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 02:09:47.629486 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-05 02:09:47.629497 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 02:09:47.629508 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 02:09:47.629519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-04-05 02:09:47.629530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 02:09:47.629541 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 02:09:47.629551 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 02:09:47.629562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 02:09:47.629573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 02:09:47.629584 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-04-05 02:09:47.629594 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:09:47.629605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 02:09:47.629616 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-04-05 02:09:47.629627 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 02:09:47.629638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-05 02:09:47.629648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 02:09:47.629659 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 02:09:47.629670 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-05 02:09:47.629681 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 02:09:47.629716 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 02:09:47.629728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-05 02:09:47.629739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-05 02:09:47.629750 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 02:09:47.629761 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 02:09:47.629771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 02:09:47.629782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 02:09:47.629793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 02:09:47.629804 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-05 02:09:47.629815 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 02:09:47.629826 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 02:09:47.629837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 02:09:47.629868 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:09:47.629879 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-05 02:09:47.629890 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 02:09:47.629901 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:09:47.629912 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 02:09:47.629923 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:09:47.629934 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 02:09:47.629944 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 02:09:47.629964 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 02:09:47.629999 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:09:47.630011 | orchestrator | 2026-04-05 02:09:47.630092 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-04-05 02:09:47.630103 | orchestrator | 2026-04-05 02:09:47.630115 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-04-05 02:09:47.630126 | orchestrator | Sunday 05 April 2026 02:09:39 +0000 (0:00:00.489) 
0:00:04.695 ********** 2026-04-05 02:09:47.630137 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:09:47.630147 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:09:47.630158 | orchestrator | ok: [testbed-manager] 2026-04-05 02:09:47.630169 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:09:47.630180 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:09:47.630191 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:09:47.630201 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:09:47.630212 | orchestrator | 2026-04-05 02:09:47.630223 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-04-05 02:09:47.630234 | orchestrator | Sunday 05 April 2026 02:09:40 +0000 (0:00:01.395) 0:00:06.090 ********** 2026-04-05 02:09:47.630245 | orchestrator | ok: [testbed-manager] 2026-04-05 02:09:47.630256 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:09:47.630267 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:09:47.630277 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:09:47.630288 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:09:47.630299 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:09:47.630309 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:09:47.630320 | orchestrator | 2026-04-05 02:09:47.630331 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-04-05 02:09:47.630342 | orchestrator | Sunday 05 April 2026 02:09:42 +0000 (0:00:01.291) 0:00:07.381 ********** 2026-04-05 02:09:47.630354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:09:47.630368 | orchestrator | 2026-04-05 02:09:47.630379 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-05 02:09:47.630390 | orchestrator | Sunday 
05 April 2026 02:09:42 +0000 (0:00:00.339) 0:00:07.721 ********** 2026-04-05 02:09:47.630401 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:09:47.630412 | orchestrator | changed: [testbed-manager] 2026-04-05 02:09:47.630422 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:09:47.630433 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:09:47.630444 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:09:47.630454 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:09:47.630465 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:09:47.630476 | orchestrator | 2026-04-05 02:09:47.630487 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-05 02:09:47.630498 | orchestrator | Sunday 05 April 2026 02:09:44 +0000 (0:00:02.296) 0:00:10.017 ********** 2026-04-05 02:09:47.630508 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:09:47.630521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:09:47.630534 | orchestrator | 2026-04-05 02:09:47.630545 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-05 02:09:47.630556 | orchestrator | Sunday 05 April 2026 02:09:45 +0000 (0:00:00.323) 0:00:10.340 ********** 2026-04-05 02:09:47.630567 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:09:47.630578 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:09:47.630588 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:09:47.630599 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:09:47.630610 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:09:47.630620 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:09:47.630640 | orchestrator | 2026-04-05 02:09:47.630657 | orchestrator | TASK [osism.commons.proxy : Set 
system wide settings in environment file] ****** 2026-04-05 02:09:47.630669 | orchestrator | Sunday 05 April 2026 02:09:46 +0000 (0:00:01.327) 0:00:11.667 ********** 2026-04-05 02:09:47.630679 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:09:47.630708 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:09:47.630720 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:09:47.630731 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:09:47.630742 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:09:47.630752 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:09:47.630763 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:09:47.630774 | orchestrator | 2026-04-05 02:09:47.630784 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-05 02:09:47.630795 | orchestrator | Sunday 05 April 2026 02:09:46 +0000 (0:00:00.601) 0:00:12.269 ********** 2026-04-05 02:09:47.630806 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:09:47.630816 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:09:47.630827 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:09:47.630838 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:09:47.630848 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:09:47.630859 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:09:47.630869 | orchestrator | ok: [testbed-manager] 2026-04-05 02:09:47.630880 | orchestrator | 2026-04-05 02:09:47.630891 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-05 02:09:47.630903 | orchestrator | Sunday 05 April 2026 02:09:47 +0000 (0:00:00.461) 0:00:12.731 ********** 2026-04-05 02:09:47.630915 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:09:47.630925 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:09:47.630944 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:10:02.010107 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 02:10:02.010194 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:10:02.010201 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:10:02.010205 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:10:02.010211 | orchestrator | 2026-04-05 02:10:02.010219 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-05 02:10:02.010228 | orchestrator | Sunday 05 April 2026 02:09:47 +0000 (0:00:00.272) 0:00:13.003 ********** 2026-04-05 02:10:02.010238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:10:02.010260 | orchestrator | 2026-04-05 02:10:02.010265 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-05 02:10:02.010271 | orchestrator | Sunday 05 April 2026 02:09:48 +0000 (0:00:00.344) 0:00:13.348 ********** 2026-04-05 02:10:02.010275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:10:02.010279 | orchestrator | 2026-04-05 02:10:02.010283 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-05 02:10:02.010287 | orchestrator | Sunday 05 April 2026 02:09:48 +0000 (0:00:00.414) 0:00:13.762 ********** 2026-04-05 02:10:02.010291 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010296 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010300 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010304 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010308 | orchestrator | ok: [testbed-node-3] 2026-04-05 
02:10:02.010311 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010315 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010319 | orchestrator | 2026-04-05 02:10:02.010323 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-05 02:10:02.010327 | orchestrator | Sunday 05 April 2026 02:09:50 +0000 (0:00:01.626) 0:00:15.389 ********** 2026-04-05 02:10:02.010348 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:10:02.010352 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:10:02.010356 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:10:02.010360 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:10:02.010363 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:10:02.010367 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:10:02.010371 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:10:02.010375 | orchestrator | 2026-04-05 02:10:02.010379 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-05 02:10:02.010383 | orchestrator | Sunday 05 April 2026 02:09:50 +0000 (0:00:00.339) 0:00:15.729 ********** 2026-04-05 02:10:02.010386 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010390 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010394 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010398 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010401 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010405 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010409 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010412 | orchestrator | 2026-04-05 02:10:02.010416 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-05 02:10:02.010420 | orchestrator | Sunday 05 April 2026 02:09:51 +0000 (0:00:00.589) 0:00:16.319 ********** 2026-04-05 02:10:02.010424 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 02:10:02.010428 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:10:02.010431 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:10:02.010435 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:10:02.010439 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:10:02.010442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:10:02.010447 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:10:02.010450 | orchestrator | 2026-04-05 02:10:02.010454 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-05 02:10:02.010459 | orchestrator | Sunday 05 April 2026 02:09:51 +0000 (0:00:00.274) 0:00:16.593 ********** 2026-04-05 02:10:02.010463 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010467 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:10:02.010470 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:02.010474 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:10:02.010478 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:10:02.010481 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:02.010491 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:10:02.010495 | orchestrator | 2026-04-05 02:10:02.010499 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-05 02:10:02.010503 | orchestrator | Sunday 05 April 2026 02:09:51 +0000 (0:00:00.584) 0:00:17.177 ********** 2026-04-05 02:10:02.010506 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010510 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:10:02.010514 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:10:02.010518 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:10:02.010521 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:02.010525 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:02.010529 | orchestrator | changed: 
[testbed-node-2] 2026-04-05 02:10:02.010532 | orchestrator | 2026-04-05 02:10:02.010536 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-05 02:10:02.010540 | orchestrator | Sunday 05 April 2026 02:09:53 +0000 (0:00:01.158) 0:00:18.336 ********** 2026-04-05 02:10:02.010544 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010548 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010551 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010555 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010559 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010562 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010566 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010570 | orchestrator | 2026-04-05 02:10:02.010574 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-05 02:10:02.010581 | orchestrator | Sunday 05 April 2026 02:09:54 +0000 (0:00:01.109) 0:00:19.445 ********** 2026-04-05 02:10:02.010599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:10:02.010604 | orchestrator | 2026-04-05 02:10:02.010609 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-05 02:10:02.010614 | orchestrator | Sunday 05 April 2026 02:09:54 +0000 (0:00:00.378) 0:00:19.824 ********** 2026-04-05 02:10:02.010618 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:10:02.010623 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:10:02.010627 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:02.010632 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:10:02.010636 | orchestrator | changed: [testbed-node-3] 2026-04-05 
02:10:02.010641 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:02.010645 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:10:02.010650 | orchestrator | 2026-04-05 02:10:02.010655 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-05 02:10:02.010660 | orchestrator | Sunday 05 April 2026 02:09:56 +0000 (0:00:01.567) 0:00:21.391 ********** 2026-04-05 02:10:02.010664 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010669 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010673 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010678 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010682 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010687 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010726 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010731 | orchestrator | 2026-04-05 02:10:02.010736 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-05 02:10:02.010741 | orchestrator | Sunday 05 April 2026 02:09:56 +0000 (0:00:00.271) 0:00:21.663 ********** 2026-04-05 02:10:02.010745 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010750 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010754 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010759 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010764 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010768 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010773 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010777 | orchestrator | 2026-04-05 02:10:02.010781 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-05 02:10:02.010786 | orchestrator | Sunday 05 April 2026 02:09:56 +0000 (0:00:00.271) 0:00:21.935 ********** 2026-04-05 02:10:02.010791 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010795 | 
orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010800 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010804 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010809 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010813 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010818 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010822 | orchestrator | 2026-04-05 02:10:02.010827 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-05 02:10:02.010832 | orchestrator | Sunday 05 April 2026 02:09:56 +0000 (0:00:00.273) 0:00:22.208 ********** 2026-04-05 02:10:02.010837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:10:02.010844 | orchestrator | 2026-04-05 02:10:02.010848 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-05 02:10:02.010853 | orchestrator | Sunday 05 April 2026 02:09:57 +0000 (0:00:00.326) 0:00:22.535 ********** 2026-04-05 02:10:02.010857 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010862 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010870 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010874 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010879 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.010884 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.010888 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.010893 | orchestrator | 2026-04-05 02:10:02.010898 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-05 02:10:02.010902 | orchestrator | Sunday 05 April 2026 02:09:57 +0000 (0:00:00.557) 0:00:23.092 ********** 2026-04-05 02:10:02.010906 | orchestrator | 
skipping: [testbed-manager] 2026-04-05 02:10:02.010911 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:10:02.010916 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:10:02.010920 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:10:02.010924 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:10:02.010929 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:10:02.010934 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:10:02.010938 | orchestrator | 2026-04-05 02:10:02.010943 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-05 02:10:02.010948 | orchestrator | Sunday 05 April 2026 02:09:58 +0000 (0:00:00.251) 0:00:23.344 ********** 2026-04-05 02:10:02.010953 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010957 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.010962 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.010966 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.010971 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:02.010976 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:02.010980 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:10:02.010984 | orchestrator | 2026-04-05 02:10:02.010988 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-05 02:10:02.010992 | orchestrator | Sunday 05 April 2026 02:09:59 +0000 (0:00:01.189) 0:00:24.534 ********** 2026-04-05 02:10:02.010996 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.010999 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.011003 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.011007 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:02.011011 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:10:02.011014 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:10:02.011018 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:10:02.011022 | orchestrator | 
2026-04-05 02:10:02.011026 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-05 02:10:02.011030 | orchestrator | Sunday 05 April 2026 02:09:59 +0000 (0:00:00.670) 0:00:25.205 ********** 2026-04-05 02:10:02.011033 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:02.011037 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:02.011041 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:02.011049 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:02.011056 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:10:45.700311 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:45.700433 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:45.700453 | orchestrator | 2026-04-05 02:10:45.700468 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-05 02:10:45.700484 | orchestrator | Sunday 05 April 2026 02:10:01 +0000 (0:00:02.057) 0:00:27.263 ********** 2026-04-05 02:10:45.700498 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:10:45.700512 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:45.700525 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:45.700541 | orchestrator | changed: [testbed-manager] 2026-04-05 02:10:45.700555 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:10:45.700569 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:10:45.700582 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:10:45.700596 | orchestrator | 2026-04-05 02:10:45.700610 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-05 02:10:45.700624 | orchestrator | Sunday 05 April 2026 02:10:18 +0000 (0:00:16.288) 0:00:43.551 ********** 2026-04-05 02:10:45.700637 | orchestrator | ok: [testbed-manager] 2026-04-05 02:10:45.700683 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:10:45.700722 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:10:45.700736 | orchestrator 
| ok: [testbed-node-5]
2026-04-05 02:10:45.700750 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.700762 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.700775 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.700788 | orchestrator |
2026-04-05 02:10:45.700802 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-05 02:10:45.700815 | orchestrator | Sunday 05 April 2026 02:10:18 +0000 (0:00:00.275) 0:00:43.827 **********
2026-04-05 02:10:45.700828 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.700842 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.700855 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.700869 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.700882 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.700895 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.700909 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.700922 | orchestrator |
2026-04-05 02:10:45.700936 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-05 02:10:45.700950 | orchestrator | Sunday 05 April 2026 02:10:18 +0000 (0:00:00.264) 0:00:44.092 **********
2026-04-05 02:10:45.700963 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.700976 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.700989 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.701002 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.701016 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.701029 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.701043 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.701056 | orchestrator |
2026-04-05 02:10:45.701070 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-05 02:10:45.701082 | orchestrator | Sunday 05 April 2026 02:10:19 +0000 (0:00:00.276) 0:00:44.368 **********
2026-04-05 02:10:45.701096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:10:45.701112 | orchestrator |
2026-04-05 02:10:45.701124 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-05 02:10:45.701136 | orchestrator | Sunday 05 April 2026 02:10:19 +0000 (0:00:00.347) 0:00:44.716 **********
2026-04-05 02:10:45.701147 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.701159 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.701171 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.701185 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.701199 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.701212 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.701225 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.701238 | orchestrator |
2026-04-05 02:10:45.701251 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-05 02:10:45.701264 | orchestrator | Sunday 05 April 2026 02:10:21 +0000 (0:00:01.769) 0:00:46.486 **********
2026-04-05 02:10:45.701276 | orchestrator | changed: [testbed-manager]
2026-04-05 02:10:45.701289 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:10:45.701302 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:10:45.701315 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:10:45.701328 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:10:45.701341 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:10:45.701354 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:10:45.701366 | orchestrator |
2026-04-05 02:10:45.701379 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-05 02:10:45.701408 | orchestrator | Sunday 05 April 2026 02:10:22 +0000 (0:00:01.129) 0:00:47.616 **********
2026-04-05 02:10:45.701422 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.701435 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.701447 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.701470 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.701483 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.701497 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.701510 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.701523 | orchestrator |
2026-04-05 02:10:45.701536 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-05 02:10:45.701550 | orchestrator | Sunday 05 April 2026 02:10:23 +0000 (0:00:00.997) 0:00:48.613 **********
2026-04-05 02:10:45.701564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:10:45.701579 | orchestrator |
2026-04-05 02:10:45.701592 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-05 02:10:45.701607 | orchestrator | Sunday 05 April 2026 02:10:23 +0000 (0:00:00.334) 0:00:48.948 **********
2026-04-05 02:10:45.701620 | orchestrator | changed: [testbed-manager]
2026-04-05 02:10:45.701633 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:10:45.701647 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:10:45.701655 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:10:45.701663 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:10:45.701671 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:10:45.701679 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:10:45.701686 | orchestrator |
2026-04-05 02:10:45.701761 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-05 02:10:45.701772 | orchestrator | Sunday 05 April 2026 02:10:24 +0000 (0:00:01.118) 0:00:50.067 **********
2026-04-05 02:10:45.701780 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:10:45.701788 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:10:45.701796 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:10:45.701804 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:10:45.701812 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:10:45.701820 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:10:45.701828 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:10:45.701836 | orchestrator |
2026-04-05 02:10:45.701844 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-05 02:10:45.701851 | orchestrator | Sunday 05 April 2026 02:10:25 +0000 (0:00:00.286) 0:00:50.353 **********
2026-04-05 02:10:45.701860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:10:45.701868 | orchestrator |
2026-04-05 02:10:45.701876 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-05 02:10:45.701884 | orchestrator | Sunday 05 April 2026 02:10:25 +0000 (0:00:00.335) 0:00:50.689 **********
2026-04-05 02:10:45.701891 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.701899 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.701907 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.701915 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.701923 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.701931 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.701938 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.701946 | orchestrator |
2026-04-05 02:10:45.701954 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-05 02:10:45.701962 | orchestrator | Sunday 05 April 2026 02:10:27 +0000 (0:00:01.745) 0:00:52.434 **********
2026-04-05 02:10:45.701970 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:10:45.701978 | orchestrator | changed: [testbed-manager]
2026-04-05 02:10:45.701986 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:10:45.701994 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:10:45.702001 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:10:45.702009 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:10:45.702068 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:10:45.702085 | orchestrator |
2026-04-05 02:10:45.702094 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-05 02:10:45.702102 | orchestrator | Sunday 05 April 2026 02:10:28 +0000 (0:00:01.143) 0:00:53.578 **********
2026-04-05 02:10:45.702110 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:10:45.702118 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:10:45.702126 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:10:45.702134 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:10:45.702142 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:10:45.702149 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:10:45.702157 | orchestrator | changed: [testbed-manager]
2026-04-05 02:10:45.702165 | orchestrator |
2026-04-05 02:10:45.702203 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-05 02:10:45.702212 | orchestrator | Sunday 05 April 2026 02:10:42 +0000 (0:00:14.409) 0:01:07.987 **********
2026-04-05 02:10:45.702220 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.702228 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.702236 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.702243 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.702251 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.702259 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.702267 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.702274 | orchestrator |
2026-04-05 02:10:45.702282 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-05 02:10:45.702290 | orchestrator | Sunday 05 April 2026 02:10:43 +0000 (0:00:01.002) 0:01:08.990 **********
2026-04-05 02:10:45.702298 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.702306 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.702314 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.702322 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.702329 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.702337 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.702345 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.702353 | orchestrator |
2026-04-05 02:10:45.702360 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-05 02:10:45.702369 | orchestrator | Sunday 05 April 2026 02:10:44 +0000 (0:00:01.048) 0:01:10.039 **********
2026-04-05 02:10:45.702383 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.702391 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.702399 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.702407 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.702414 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.702422 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.702430 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.702438 | orchestrator |
2026-04-05 02:10:45.702446 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-05 02:10:45.702454 | orchestrator | Sunday 05 April 2026 02:10:45 +0000 (0:00:00.284) 0:01:10.323 **********
2026-04-05 02:10:45.702462 | orchestrator | ok: [testbed-manager]
2026-04-05 02:10:45.702469 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:10:45.702477 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:10:45.702485 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:10:45.702493 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:10:45.702500 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:10:45.702508 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:10:45.702516 | orchestrator |
2026-04-05 02:10:45.702524 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-05 02:10:45.702532 | orchestrator | Sunday 05 April 2026 02:10:45 +0000 (0:00:00.266) 0:01:10.589 **********
2026-04-05 02:10:45.702540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:10:45.702549 | orchestrator |
2026-04-05 02:10:45.702565 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-05 02:13:31.905807 | orchestrator | Sunday 05 April 2026 02:10:45 +0000 (0:00:00.366) 0:01:10.956 **********
2026-04-05 02:13:31.905920 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.905933 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.905945 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.905957 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.905970 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.905983 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.905995 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906008 | orchestrator |
2026-04-05 02:13:31.906078 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-05 02:13:31.906092 | orchestrator | Sunday 05 April 2026 02:10:47 +0000 (0:00:01.687) 0:01:12.644 **********
2026-04-05 02:13:31.906105 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:13:31.906114 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:13:31.906121 | orchestrator | changed: [testbed-manager]
2026-04-05 02:13:31.906129 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:13:31.906136 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:13:31.906144 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:13:31.906151 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:13:31.906158 | orchestrator |
2026-04-05 02:13:31.906166 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-05 02:13:31.906174 | orchestrator | Sunday 05 April 2026 02:10:48 +0000 (0:00:00.640) 0:01:13.284 **********
2026-04-05 02:13:31.906182 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906189 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906196 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906204 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906211 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906218 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906225 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906233 | orchestrator |
2026-04-05 02:13:31.906241 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-05 02:13:31.906248 | orchestrator | Sunday 05 April 2026 02:10:48 +0000 (0:00:00.279) 0:01:13.563 **********
2026-04-05 02:13:31.906255 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906263 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906270 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906277 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906284 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906291 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906299 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906306 | orchestrator |
2026-04-05 02:13:31.906313 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-05 02:13:31.906320 | orchestrator | Sunday 05 April 2026 02:10:49 +0000 (0:00:01.254) 0:01:14.817 **********
2026-04-05 02:13:31.906328 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:13:31.906335 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:13:31.906342 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:13:31.906349 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:13:31.906356 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:13:31.906365 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:13:31.906373 | orchestrator | changed: [testbed-manager]
2026-04-05 02:13:31.906381 | orchestrator |
2026-04-05 02:13:31.906394 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-05 02:13:31.906403 | orchestrator | Sunday 05 April 2026 02:10:51 +0000 (0:00:01.832) 0:01:16.650 **********
2026-04-05 02:13:31.906411 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906419 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906428 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906436 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906445 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906454 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906462 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906470 | orchestrator |
2026-04-05 02:13:31.906479 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-05 02:13:31.906509 | orchestrator | Sunday 05 April 2026 02:10:54 +0000 (0:00:02.693) 0:01:19.344 **********
2026-04-05 02:13:31.906517 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906526 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906535 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906543 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906552 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906559 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906568 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906576 | orchestrator |
2026-04-05 02:13:31.906585 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-05 02:13:31.906593 | orchestrator | Sunday 05 April 2026 02:11:55 +0000 (0:01:01.237) 0:02:20.581 **********
2026-04-05 02:13:31.906601 | orchestrator | changed: [testbed-manager]
2026-04-05 02:13:31.906609 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:13:31.906618 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:13:31.906628 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:13:31.906640 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:13:31.906652 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:13:31.906664 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:13:31.906677 | orchestrator |
2026-04-05 02:13:31.906690 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-05 02:13:31.906703 | orchestrator | Sunday 05 April 2026 02:13:13 +0000 (0:01:18.584) 0:03:39.166 **********
2026-04-05 02:13:31.906717 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906731 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906744 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906755 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906768 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906779 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906790 | orchestrator | ok: [testbed-manager]
2026-04-05 02:13:31.906800 | orchestrator |
2026-04-05 02:13:31.906812 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-05 02:13:31.906824 | orchestrator | Sunday 05 April 2026 02:13:15 +0000 (0:00:01.821) 0:03:40.987 **********
2026-04-05 02:13:31.906835 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:13:31.906845 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:13:31.906852 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:13:31.906859 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:13:31.906866 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:13:31.906873 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:13:31.906905 | orchestrator | changed: [testbed-manager]
2026-04-05 02:13:31.906914 | orchestrator |
2026-04-05 02:13:31.906921 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-05 02:13:31.906928 | orchestrator | Sunday 05 April 2026 02:13:29 +0000 (0:00:13.870) 0:03:54.857 **********
2026-04-05 02:13:31.906967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-05 02:13:31.906990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-05 02:13:31.907011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-05 02:13:31.907020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-05 02:13:31.907028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-05 02:13:31.907035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-05 02:13:31.907042 | orchestrator |
2026-04-05 02:13:31.907050 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-05 02:13:31.907057 | orchestrator | Sunday 05 April 2026 02:13:30 +0000 (0:00:00.432) 0:03:55.290 **********
2026-04-05 02:13:31.907065 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907072 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907079 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:13:31.907087 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907094 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:13:31.907101 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:13:31.907112 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907119 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:13:31.907127 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907134 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-05 02:13:31.907148 | orchestrator |
2026-04-05 02:13:31.907156 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-05 02:13:31.907163 | orchestrator | Sunday 05 April 2026 02:13:31 +0000 (0:00:01.758) 0:03:57.049 **********
2026-04-05 02:13:31.907170 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:31.907178 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:31.907185 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:31.907193 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:31.907200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:31.907212 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.554439 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.554569 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.554609 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.554623 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.554634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.554645 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.554656 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.554667 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.554678 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.554689 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.554700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.554711 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.554721 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.554732 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.554743 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.554754 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.554764 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.554775 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.554786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.554797 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.554808 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.554819 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.554829 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.554840 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.554851 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.554862 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:13:39.554874 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.554885 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.554921 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.554932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.554958 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.554972 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.554985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.554997 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.555010 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.555030 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:13:39.555042 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:13:39.555053 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:13:39.555064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.555074 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.555085 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-05 02:13:39.555096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.555107 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.555137 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-05 02:13:39.555149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.555160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.555171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-05 02:13:39.555181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.555192 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.555203 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-05 02:13:39.555214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.555224 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.555235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-05 02:13:39.555246 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.555256 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.555267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.555285 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.555304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.555336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.555355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-05 02:13:39.555373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.555392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.555410 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-05 02:13:39.555429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.555447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.555466 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-05 02:13:39.555486 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-05 02:13:39.555507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-05 02:13:39.555538 | orchestrator |
2026-04-05 02:13:39.555551 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-05 02:13:39.555562 | orchestrator | Sunday 05 April 2026 02:13:36 +0000 (0:00:04.679) 0:04:01.728 **********
2026-04-05 02:13:39.555573 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555595 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555623 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555659 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-05 02:13:39.555677 | orchestrator |
2026-04-05 02:13:39.555697 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-05 02:13:39.555715 | orchestrator | Sunday 05 April 2026 02:13:37 +0000 (0:00:00.620) 0:04:02.349 **********
2026-04-05 02:13:39.555729 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555740 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:13:39.555751 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555762 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555773 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:13:39.555783 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:13:39.555794 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555805 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:13:39.555816 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555833 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:39.555864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021506 | orchestrator |
2026-04-05 02:13:54.021628 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-05 02:13:54.021644 | orchestrator | Sunday 05 April 2026 02:13:39 +0000 (0:00:02.465) 0:04:04.814 **********
2026-04-05 02:13:54.021655 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021667 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:13:54.021677 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021687 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:13:54.021698 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021744 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021756 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:13:54.021767 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:13:54.021777 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021787 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-05 02:13:54.021807 | orchestrator |
2026-04-05 02:13:54.021817 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-05 02:13:54.021852 | orchestrator | Sunday 05 April 2026 02:13:40 +0000 (0:00:00.604) 0:04:05.419 **********
2026-04-05 02:13:54.021863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 02:13:54.021873 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:13:54.021883 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 02:13:54.021893 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:13:54.021903 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-05 02:13:54.021913 | orchestrator | skipping:
[testbed-node-1] 2026-04-05 02:13:54.021958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-05 02:13:54.021973 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:13:54.021983 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 02:13:54.021993 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 02:13:54.022003 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-05 02:13:54.022064 | orchestrator | 2026-04-05 02:13:54.022078 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-05 02:13:54.022090 | orchestrator | Sunday 05 April 2026 02:13:41 +0000 (0:00:01.667) 0:04:07.086 ********** 2026-04-05 02:13:54.022101 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:13:54.022113 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:13:54.022124 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:13:54.022135 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:13:54.022147 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:13:54.022158 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:13:54.022169 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:13:54.022180 | orchestrator | 2026-04-05 02:13:54.022192 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-04-05 02:13:54.022203 | orchestrator | Sunday 05 April 2026 02:13:42 +0000 (0:00:00.311) 0:04:07.397 ********** 2026-04-05 02:13:54.022215 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:13:54.022227 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:13:54.022238 | orchestrator | ok: [testbed-manager] 2026-04-05 02:13:54.022250 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:13:54.022261 | 
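The `osism.commons.sysctl` tasks above apply kernel parameters per host group (TCP buffer and backlog tuning on the generic group, `nf_conntrack_max` on compute/network, inotify limits on k3s nodes). As a minimal sketch, the same name/value pairs could be rendered into a `sysctl.d`-style fragment; the values below are copied from the log, but the rendering helper and any target filename are hypothetical, not part of the osism collection.

```python
# Sketch: render a sysctl.conf-style fragment from name/value pairs like
# those the osism.commons.sysctl tasks apply above. Values copied from the
# log; this helper is illustrative and does not exist in the collection.
PARAMS = [
    ("net.core.somaxconn", 4096),
    ("net.ipv4.tcp_max_syn_backlog", 8192),
    ("net.ipv4.tcp_fin_timeout", 20),
    ("net.ipv4.tcp_tw_reuse", 1),
    ("vm.swappiness", 1),
]

def render_sysctl(params):
    """Return the text of a sysctl.conf fragment, one 'key = value' per line."""
    return "\n".join(f"{name} = {value}" for name, value in params) + "\n"

if __name__ == "__main__":
    print(render_sysctl(PARAMS), end="")
```

On a real host such a fragment would typically be written under `/etc/sysctl.d/` and loaded with `sysctl --system`; the Ansible tasks in the log achieve the same effect idempotently per group.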
orchestrator | ok: [testbed-node-1] 2026-04-05 02:13:54.022272 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:13:54.022284 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:13:54.022295 | orchestrator | 2026-04-05 02:13:54.022307 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-05 02:13:54.022319 | orchestrator | Sunday 05 April 2026 02:13:47 +0000 (0:00:05.839) 0:04:13.237 ********** 2026-04-05 02:13:54.022330 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-05 02:13:54.022342 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:13:54.022364 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-05 02:13:54.022376 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-05 02:13:54.022387 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:13:54.022399 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-05 02:13:54.022411 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:13:54.022421 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-05 02:13:54.022434 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:13:54.022449 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:13:54.022490 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-05 02:13:54.022515 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:13:54.022531 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-05 02:13:54.022547 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:13:54.022563 | orchestrator | 2026-04-05 02:13:54.022592 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-05 02:13:54.022609 | orchestrator | Sunday 05 April 2026 02:13:48 +0000 (0:00:00.359) 0:04:13.597 ********** 2026-04-05 02:13:54.022626 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-05 02:13:54.022642 | orchestrator | ok: [testbed-node-4] => 
(item=cron) 2026-04-05 02:13:54.022659 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-04-05 02:13:54.022689 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-05 02:13:54.022699 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-05 02:13:54.022709 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-05 02:13:54.022718 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-05 02:13:54.022728 | orchestrator | 2026-04-05 02:13:54.022738 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-05 02:13:54.022747 | orchestrator | Sunday 05 April 2026 02:13:49 +0000 (0:00:01.128) 0:04:14.725 ********** 2026-04-05 02:13:54.022759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:13:54.022772 | orchestrator | 2026-04-05 02:13:54.022781 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-05 02:13:54.022791 | orchestrator | Sunday 05 April 2026 02:13:49 +0000 (0:00:00.421) 0:04:15.146 ********** 2026-04-05 02:13:54.022801 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:13:54.022811 | orchestrator | ok: [testbed-manager] 2026-04-05 02:13:54.022820 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:13:54.022830 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:13:54.022839 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:13:54.022849 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:13:54.022858 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:13:54.022868 | orchestrator | 2026-04-05 02:13:54.022877 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-05 02:13:54.022887 | orchestrator | Sunday 05 April 2026 02:13:51 +0000 (0:00:01.239) 0:04:16.386 
********** 2026-04-05 02:13:54.022897 | orchestrator | ok: [testbed-manager] 2026-04-05 02:13:54.022906 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:13:54.022967 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:13:54.022979 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:13:54.022989 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:13:54.022998 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:13:54.023008 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:13:54.023017 | orchestrator | 2026-04-05 02:13:54.023027 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-05 02:13:54.023037 | orchestrator | Sunday 05 April 2026 02:13:51 +0000 (0:00:00.622) 0:04:17.009 ********** 2026-04-05 02:13:54.023047 | orchestrator | changed: [testbed-manager] 2026-04-05 02:13:54.023056 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:13:54.023066 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:13:54.023076 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:13:54.023085 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:13:54.023095 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:13:54.023104 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:13:54.023114 | orchestrator | 2026-04-05 02:13:54.023124 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-05 02:13:54.023133 | orchestrator | Sunday 05 April 2026 02:13:52 +0000 (0:00:00.641) 0:04:17.651 ********** 2026-04-05 02:13:54.023143 | orchestrator | ok: [testbed-manager] 2026-04-05 02:13:54.023153 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:13:54.023162 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:13:54.023172 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:13:54.023181 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:13:54.023191 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:13:54.023200 | orchestrator | ok: [testbed-node-2] 2026-04-05 
02:13:54.023210 | orchestrator | 2026-04-05 02:13:54.023219 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-05 02:13:54.023237 | orchestrator | Sunday 05 April 2026 02:13:53 +0000 (0:00:00.629) 0:04:18.280 ********** 2026-04-05 02:13:54.023257 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353803.6583755, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:54.023271 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353772.5083642, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:54.023281 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353811.0350254, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:54.023313 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353799.970162, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171086 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353812.4790983, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171197 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353809.4602282, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171214 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775353803.3808854, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171250 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171277 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171289 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171301 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171341 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171354 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 
02:13:59.171365 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 02:13:59.171386 | orchestrator | 2026-04-05 02:13:59.171400 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-05 02:13:59.171412 | orchestrator | Sunday 05 April 2026 02:13:54 +0000 (0:00:00.993) 0:04:19.273 ********** 2026-04-05 02:13:59.171423 | orchestrator | changed: [testbed-manager] 2026-04-05 02:13:59.171435 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:13:59.171445 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:13:59.171456 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:13:59.171467 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:13:59.171478 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:13:59.171489 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:13:59.171500 | orchestrator | 2026-04-05 02:13:59.171511 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-04-05 02:13:59.171522 | orchestrator | Sunday 05 April 2026 02:13:55 +0000 (0:00:01.153) 0:04:20.427 ********** 2026-04-05 02:13:59.171534 | orchestrator | changed: [testbed-manager] 2026-04-05 02:13:59.171554 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:13:59.171574 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:13:59.171593 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:13:59.171612 | orchestrator | changed: [testbed-node-0] 
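The "Remove pam_motd.so rule" task above edits `/etc/pam.d/sshd` and `/etc/pam.d/login` on every host so the dynamic motd is no longer injected at login. A minimal sketch of that filtering, as a pure string transformation (no files touched; the helper name is hypothetical, not the task's actual implementation):

```python
# Sketch: drop pam_motd.so rules from PAM configuration text, mirroring the
# "Remove pam_motd.so rule" task in the log. Pure string filtering; the real
# task operates on /etc/pam.d/sshd and /etc/pam.d/login in place.
def strip_pam_motd(text):
    """Return the PAM config text with every line referencing pam_motd.so removed."""
    return "\n".join(
        line for line in text.splitlines() if "pam_motd.so" not in line
    )

if __name__ == "__main__":
    sample = (
        "session optional pam_motd.so motd=/run/motd.dynamic\n"
        "session required pam_env.so\n"
    )
    print(strip_pam_motd(sample))
```

The subsequent "Copy motd file" / "Copy issue file" tasks then install static banners, so removing the PAM rule and disabling `motd-news` together replace Ubuntu's dynamic motd with fixed content.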
2026-04-05 02:13:59.171632 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:13:59.171651 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:13:59.171669 | orchestrator | 2026-04-05 02:13:59.171697 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-05 02:13:59.171718 | orchestrator | Sunday 05 April 2026 02:13:56 +0000 (0:00:01.207) 0:04:21.634 ********** 2026-04-05 02:13:59.171738 | orchestrator | changed: [testbed-manager] 2026-04-05 02:13:59.171758 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:13:59.171778 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:13:59.171793 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:13:59.171804 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:13:59.171814 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:13:59.171825 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:13:59.171835 | orchestrator | 2026-04-05 02:13:59.171846 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-05 02:13:59.171857 | orchestrator | Sunday 05 April 2026 02:13:57 +0000 (0:00:01.236) 0:04:22.871 ********** 2026-04-05 02:13:59.171867 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:13:59.171878 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:13:59.171888 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:13:59.171899 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:13:59.171909 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:13:59.171919 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:13:59.171968 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:13:59.171979 | orchestrator | 2026-04-05 02:13:59.171990 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-05 02:13:59.172001 | orchestrator | Sunday 05 April 2026 02:13:57 +0000 (0:00:00.335) 0:04:23.207 ********** 2026-04-05 
02:13:59.172012 | orchestrator | ok: [testbed-manager] 2026-04-05 02:13:59.172023 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:13:59.172034 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:13:59.172045 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:13:59.172055 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:13:59.172066 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:13:59.172076 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:13:59.172086 | orchestrator | 2026-04-05 02:13:59.172097 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-05 02:13:59.172108 | orchestrator | Sunday 05 April 2026 02:13:58 +0000 (0:00:00.793) 0:04:24.000 ********** 2026-04-05 02:13:59.172121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:13:59.172144 | orchestrator | 2026-04-05 02:13:59.172155 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-05 02:13:59.172176 | orchestrator | Sunday 05 April 2026 02:13:59 +0000 (0:00:00.424) 0:04:24.425 ********** 2026-04-05 02:15:18.890371 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.890475 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:15:18.890493 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:15:18.890504 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:15:18.890514 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:15:18.890524 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:15:18.890533 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:15:18.890544 | orchestrator | 2026-04-05 02:15:18.890555 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-05 02:15:18.890567 | orchestrator | 
Sunday 05 April 2026 02:14:07 +0000 (0:00:08.682) 0:04:33.107 ********** 2026-04-05 02:15:18.890577 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.890587 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:15:18.890597 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:15:18.890606 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.890616 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:15:18.890626 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:15:18.890636 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:15:18.890645 | orchestrator | 2026-04-05 02:15:18.890656 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-05 02:15:18.890666 | orchestrator | Sunday 05 April 2026 02:14:09 +0000 (0:00:01.357) 0:04:34.465 ********** 2026-04-05 02:15:18.890675 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.890685 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:15:18.890695 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:15:18.890704 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.890714 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:15:18.890723 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:15:18.890756 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:15:18.890771 | orchestrator | 2026-04-05 02:15:18.890788 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-05 02:15:18.890806 | orchestrator | Sunday 05 April 2026 02:14:10 +0000 (0:00:01.333) 0:04:35.798 ********** 2026-04-05 02:15:18.890823 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.890840 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:15:18.890856 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:15:18.890870 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:15:18.890881 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.890890 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:15:18.890900 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 02:15:18.890911 | orchestrator | 2026-04-05 02:15:18.890923 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-05 02:15:18.890935 | orchestrator | Sunday 05 April 2026 02:14:10 +0000 (0:00:00.309) 0:04:36.107 ********** 2026-04-05 02:15:18.890946 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.890962 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:15:18.890978 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:15:18.890994 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:15:18.891010 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.891026 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:15:18.891072 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:15:18.891088 | orchestrator | 2026-04-05 02:15:18.891102 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-05 02:15:18.891116 | orchestrator | Sunday 05 April 2026 02:14:11 +0000 (0:00:00.378) 0:04:36.485 ********** 2026-04-05 02:15:18.891130 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.891147 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:15:18.891163 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:15:18.891208 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:15:18.891219 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.891229 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:15:18.891238 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:15:18.891247 | orchestrator | 2026-04-05 02:15:18.891257 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-05 02:15:18.891267 | orchestrator | Sunday 05 April 2026 02:14:11 +0000 (0:00:00.330) 0:04:36.816 ********** 2026-04-05 02:15:18.891277 | orchestrator | ok: [testbed-manager] 2026-04-05 02:15:18.891286 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:15:18.891296 | orchestrator | ok: 
[testbed-node-3]
2026-04-05 02:15:18.891305 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:18.891315 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:18.891324 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:18.891334 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:18.891343 | orchestrator |
2026-04-05 02:15:18.891353 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-05 02:15:18.891363 | orchestrator | Sunday 05 April 2026 02:14:18 +0000 (0:00:06.708) 0:04:43.524 **********
2026-04-05 02:15:18.891375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:15:18.891388 | orchestrator |
2026-04-05 02:15:18.891398 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-05 02:15:18.891407 | orchestrator | Sunday 05 April 2026 02:14:18 +0000 (0:00:00.471) 0:04:43.995 **********
2026-04-05 02:15:18.891417 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891426 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-05 02:15:18.891436 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891445 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:18.891455 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-05 02:15:18.891481 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891491 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:18.891501 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-05 02:15:18.891510 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891520 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-05 02:15:18.891529 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:18.891539 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:18.891548 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891558 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-05 02:15:18.891567 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891577 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:18.891605 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-05 02:15:18.891615 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:18.891625 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-05 02:15:18.891634 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-05 02:15:18.891643 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:18.891653 | orchestrator |
2026-04-05 02:15:18.891662 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-05 02:15:18.891672 | orchestrator | Sunday 05 April 2026 02:14:19 +0000 (0:00:00.374) 0:04:44.370 **********
2026-04-05 02:15:18.891682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:15:18.891692 | orchestrator |
2026-04-05 02:15:18.891702 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-05 02:15:18.891719 | orchestrator | Sunday 05 April 2026 02:14:19 +0000 (0:00:00.406) 0:04:44.777 **********
2026-04-05 02:15:18.891730 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-05 02:15:18.891739 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:18.891748 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-05 02:15:18.891758 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:18.891767 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-05 02:15:18.891780 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:18.891796 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-05 02:15:18.891812 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-05 02:15:18.891828 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:18.891845 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-05 02:15:18.891862 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:18.891879 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:18.891894 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-05 02:15:18.891911 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:18.891921 | orchestrator |
2026-04-05 02:15:18.891931 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-05 02:15:18.891941 | orchestrator | Sunday 05 April 2026 02:14:19 +0000 (0:00:00.338) 0:04:45.116 **********
2026-04-05 02:15:18.891952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:15:18.891968 | orchestrator |
2026-04-05 02:15:18.891984 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-05 02:15:18.892000 | orchestrator | Sunday 05 April 2026 02:14:20 +0000 (0:00:00.464) 0:04:45.580 **********
2026-04-05 02:15:18.892016 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:18.892061 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:18.892079 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:18.892090 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:18.892106 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:18.892116 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:18.892126 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:18.892135 | orchestrator |
2026-04-05 02:15:18.892145 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-05 02:15:18.892155 | orchestrator | Sunday 05 April 2026 02:14:55 +0000 (0:00:34.763) 0:05:20.343 **********
2026-04-05 02:15:18.892165 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:18.892175 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:18.892235 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:18.892246 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:18.892256 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:18.892266 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:18.892275 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:18.892285 | orchestrator |
2026-04-05 02:15:18.892299 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-05 02:15:18.892316 | orchestrator | Sunday 05 April 2026 02:15:03 +0000 (0:00:08.210) 0:05:28.554 **********
2026-04-05 02:15:18.892333 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:18.892349 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:18.892359 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:18.892369 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:18.892378 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:18.892388 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:18.892397 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:18.892407 | orchestrator |
2026-04-05 02:15:18.892417 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-05 02:15:18.892435 | orchestrator | Sunday 05 April 2026 02:15:10 +0000 (0:00:07.518) 0:05:36.072 **********
2026-04-05 02:15:18.892445 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:18.892455 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:18.892471 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:18.892493 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:18.892518 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:18.892532 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:18.892545 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:18.892560 | orchestrator |
2026-04-05 02:15:18.892574 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-05 02:15:18.892589 | orchestrator | Sunday 05 April 2026 02:15:12 +0000 (0:00:01.787) 0:05:37.859 **********
2026-04-05 02:15:18.892604 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:18.892618 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:18.892633 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:18.892647 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:18.892662 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:18.892677 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:18.892695 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:18.892711 | orchestrator |
2026-04-05 02:15:18.892740 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-05 02:15:30.337750 | orchestrator | Sunday 05 April 2026 02:15:18 +0000 (0:00:06.278) 0:05:44.137 **********
2026-04-05 02:15:30.337832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:15:30.337842 | orchestrator |
2026-04-05 02:15:30.337849 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-05 02:15:30.337855 | orchestrator | Sunday 05 April 2026 02:15:19 +0000 (0:00:00.450) 0:05:44.588 **********
2026-04-05 02:15:30.337861 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:30.337868 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:30.337873 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:30.337878 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:30.337883 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:30.337888 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:30.337894 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:30.337899 | orchestrator |
2026-04-05 02:15:30.337905 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-05 02:15:30.337910 | orchestrator | Sunday 05 April 2026 02:15:20 +0000 (0:00:00.737) 0:05:45.326 **********
2026-04-05 02:15:30.337915 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:30.337921 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:30.337927 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:30.337932 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:30.337937 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:30.337942 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:30.337947 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:30.337952 | orchestrator |
2026-04-05 02:15:30.337958 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-05 02:15:30.337963 | orchestrator | Sunday 05 April 2026 02:15:21 +0000 (0:00:01.741) 0:05:47.068 **********
2026-04-05 02:15:30.337968 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:15:30.337973 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:15:30.337979 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:15:30.337984 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:15:30.337989 | orchestrator | changed: [testbed-manager]
2026-04-05 02:15:30.337994 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:15:30.338000 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:15:30.338005 | orchestrator |
2026-04-05 02:15:30.338010 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-05 02:15:30.338106 | orchestrator | Sunday 05 April 2026 02:15:22 +0000 (0:00:00.784) 0:05:47.853 **********
2026-04-05 02:15:30.338136 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.338143 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.338148 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.338153 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:30.338158 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:30.338163 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:30.338168 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:30.338173 | orchestrator |
2026-04-05 02:15:30.338178 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-05 02:15:30.338183 | orchestrator | Sunday 05 April 2026 02:15:22 +0000 (0:00:00.302) 0:05:48.155 **********
2026-04-05 02:15:30.338188 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.338193 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.338198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.338213 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:30.338218 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:30.338224 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:30.338229 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:30.338233 | orchestrator |
2026-04-05 02:15:30.338239 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-05 02:15:30.338247 | orchestrator | Sunday 05 April 2026 02:15:23 +0000 (0:00:00.437) 0:05:48.593 **********
2026-04-05 02:15:30.338255 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:30.338262 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:30.338267 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:30.338272 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:30.338278 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:30.338286 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:30.338293 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:30.338298 | orchestrator |
2026-04-05 02:15:30.338303 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-05 02:15:30.338308 | orchestrator | Sunday 05 April 2026 02:15:23 +0000 (0:00:00.308) 0:05:48.901 **********
2026-04-05 02:15:30.338313 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.338318 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.338325 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.338330 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:30.338336 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:30.338342 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:30.338348 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:30.338354 | orchestrator |
2026-04-05 02:15:30.338360 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-05 02:15:30.338367 | orchestrator | Sunday 05 April 2026 02:15:23 +0000 (0:00:00.315) 0:05:49.216 **********
2026-04-05 02:15:30.338373 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:30.338379 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:30.338385 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:30.338390 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:30.338396 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:30.338402 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:30.338407 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:30.338413 | orchestrator |
2026-04-05 02:15:30.338419 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-05 02:15:30.338425 | orchestrator | Sunday 05 April 2026 02:15:24 +0000 (0:00:00.342) 0:05:49.559 **********
2026-04-05 02:15:30.338431 | orchestrator | ok: [testbed-manager] =>
2026-04-05 02:15:30.338437 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338443 | orchestrator | ok: [testbed-node-3] =>
2026-04-05 02:15:30.338448 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338454 | orchestrator | ok: [testbed-node-4] =>
2026-04-05 02:15:30.338460 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338466 | orchestrator | ok: [testbed-node-5] =>
2026-04-05 02:15:30.338471 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338497 | orchestrator | ok: [testbed-node-0] =>
2026-04-05 02:15:30.338504 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338510 | orchestrator | ok: [testbed-node-1] =>
2026-04-05 02:15:30.338516 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338521 | orchestrator | ok: [testbed-node-2] =>
2026-04-05 02:15:30.338527 | orchestrator |  docker_version: 5:27.5.1
2026-04-05 02:15:30.338533 | orchestrator |
2026-04-05 02:15:30.338539 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-05 02:15:30.338545 | orchestrator | Sunday 05 April 2026 02:15:24 +0000 (0:00:00.322) 0:05:49.882 **********
2026-04-05 02:15:30.338551 | orchestrator | ok: [testbed-manager] =>
2026-04-05 02:15:30.338557 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338563 | orchestrator | ok: [testbed-node-3] =>
2026-04-05 02:15:30.338568 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338574 | orchestrator | ok: [testbed-node-4] =>
2026-04-05 02:15:30.338580 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338586 | orchestrator | ok: [testbed-node-5] =>
2026-04-05 02:15:30.338592 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338597 | orchestrator | ok: [testbed-node-0] =>
2026-04-05 02:15:30.338603 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338609 | orchestrator | ok: [testbed-node-1] =>
2026-04-05 02:15:30.338615 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338621 | orchestrator | ok: [testbed-node-2] =>
2026-04-05 02:15:30.338626 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-05 02:15:30.338635 | orchestrator |
2026-04-05 02:15:30.338643 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-05 02:15:30.338651 | orchestrator | Sunday 05 April 2026 02:15:24 +0000 (0:00:00.329) 0:05:50.212 **********
2026-04-05 02:15:30.338659 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.338667 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.338675 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.338683 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:30.338691 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:30.338698 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:30.338706 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:30.338714 | orchestrator |
2026-04-05 02:15:30.338721 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-05 02:15:30.338729 | orchestrator | Sunday 05 April 2026 02:15:25 +0000 (0:00:00.291) 0:05:50.503 **********
2026-04-05 02:15:30.338736 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.338744 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.338751 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.338760 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:15:30.338768 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:15:30.338776 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:15:30.338785 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:15:30.338793 | orchestrator |
2026-04-05 02:15:30.338801 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-05 02:15:30.338810 | orchestrator | Sunday 05 April 2026 02:15:25 +0000 (0:00:00.271) 0:05:50.775 **********
2026-04-05 02:15:30.338819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:15:30.338826 | orchestrator |
2026-04-05 02:15:30.338836 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-05 02:15:30.338842 | orchestrator | Sunday 05 April 2026 02:15:25 +0000 (0:00:00.442) 0:05:51.217 **********
2026-04-05 02:15:30.338850 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:30.338858 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:30.338867 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:30.338875 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:30.338883 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:30.338899 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:30.338908 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:30.338917 | orchestrator |
2026-04-05 02:15:30.338925 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-05 02:15:30.338934 | orchestrator | Sunday 05 April 2026 02:15:26 +0000 (0:00:00.956) 0:05:52.173 **********
2026-04-05 02:15:30.338942 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:15:30.338950 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:15:30.338958 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:15:30.338966 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:15:30.338973 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:15:30.338978 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:15:30.338983 | orchestrator | ok: [testbed-manager]
2026-04-05 02:15:30.338988 | orchestrator |
2026-04-05 02:15:30.338993 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-05 02:15:30.338999 | orchestrator | Sunday 05 April 2026 02:15:29 +0000 (0:00:02.998) 0:05:55.172 **********
2026-04-05 02:15:30.339004 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-05 02:15:30.339010 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-05 02:15:30.339015 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-05 02:15:30.339020 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:15:30.339025 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-05 02:15:30.339030 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-05 02:15:30.339035 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-05 02:15:30.339040 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-05 02:15:30.339045 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-05 02:15:30.339096 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-05 02:15:30.339105 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:15:30.339113 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-05 02:15:30.339122 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-05 02:15:30.339129 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-05 02:15:30.339137 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:15:30.339145 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-05 02:15:30.339161 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-05 02:16:32.366531 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-05 02:16:32.366631 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:32.366644 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-05 02:16:32.366653 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-05 02:16:32.366661 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:32.366669 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-05 02:16:32.366677 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:32.366686 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-05 02:16:32.366694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-05 02:16:32.366702 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-05 02:16:32.366709 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:32.366718 | orchestrator |
2026-04-05 02:16:32.366727 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-05 02:16:32.366736 | orchestrator | Sunday 05 April 2026 02:15:30 +0000 (0:00:00.644) 0:05:55.817 **********
2026-04-05 02:16:32.366744 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.366752 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.366760 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.366768 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.366776 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.366784 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.366814 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.366823 | orchestrator |
2026-04-05 02:16:32.366831 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-05 02:16:32.366839 | orchestrator | Sunday 05 April 2026 02:15:37 +0000 (0:00:06.881) 0:06:02.698 **********
2026-04-05 02:16:32.366846 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.366854 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.366862 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.366869 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.366877 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.366885 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.366893 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.366900 | orchestrator |
2026-04-05 02:16:32.366908 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-05 02:16:32.366918 | orchestrator | Sunday 05 April 2026 02:15:38 +0000 (0:00:01.084) 0:06:03.783 **********
2026-04-05 02:16:32.366931 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.366944 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.366957 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.366970 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.366982 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.366995 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367006 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367019 | orchestrator |
2026-04-05 02:16:32.367032 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-05 02:16:32.367044 | orchestrator | Sunday 05 April 2026 02:15:46 +0000 (0:00:08.303) 0:06:12.086 **********
2026-04-05 02:16:32.367056 | orchestrator | changed: [testbed-manager]
2026-04-05 02:16:32.367069 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367082 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367095 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367109 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367182 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367198 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367209 | orchestrator |
2026-04-05 02:16:32.367218 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-05 02:16:32.367226 | orchestrator | Sunday 05 April 2026 02:15:50 +0000 (0:00:03.315) 0:06:15.402 **********
2026-04-05 02:16:32.367234 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.367242 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367250 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367258 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367265 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367273 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367281 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367289 | orchestrator |
2026-04-05 02:16:32.367297 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-05 02:16:32.367305 | orchestrator | Sunday 05 April 2026 02:15:51 +0000 (0:00:01.337) 0:06:16.740 **********
2026-04-05 02:16:32.367312 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.367320 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367328 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367336 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367343 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367351 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367359 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367367 | orchestrator |
2026-04-05 02:16:32.367375 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-05 02:16:32.367383 | orchestrator | Sunday 05 April 2026 02:15:53 +0000 (0:00:01.671) 0:06:18.411 **********
2026-04-05 02:16:32.367390 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:32.367398 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:32.367406 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:32.367413 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:32.367430 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:32.367438 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:32.367446 | orchestrator | changed: [testbed-manager]
2026-04-05 02:16:32.367453 | orchestrator |
2026-04-05 02:16:32.367461 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-05 02:16:32.367469 | orchestrator | Sunday 05 April 2026 02:15:53 +0000 (0:00:00.700) 0:06:19.112 **********
2026-04-05 02:16:32.367477 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.367485 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367492 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367500 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367508 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367516 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367529 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367540 | orchestrator |
2026-04-05 02:16:32.367552 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-05 02:16:32.367585 | orchestrator | Sunday 05 April 2026 02:16:03 +0000 (0:00:09.859) 0:06:28.971 **********
2026-04-05 02:16:32.367600 | orchestrator | changed: [testbed-manager]
2026-04-05 02:16:32.367608 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367616 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367623 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367631 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367639 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367646 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367654 | orchestrator |
2026-04-05 02:16:32.367662 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-05 02:16:32.367670 | orchestrator | Sunday 05 April 2026 02:16:04 +0000 (0:00:00.947) 0:06:29.918 **********
2026-04-05 02:16:32.367678 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.367686 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367693 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367701 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367709 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367716 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367724 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367732 | orchestrator |
2026-04-05 02:16:32.367739 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-05 02:16:32.367748 | orchestrator | Sunday 05 April 2026 02:16:13 +0000 (0:00:09.010) 0:06:38.929 **********
2026-04-05 02:16:32.367761 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.367774 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.367786 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.367798 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.367810 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.367822 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.367835 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.367847 | orchestrator |
2026-04-05 02:16:32.367861 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-05 02:16:32.367874 | orchestrator | Sunday 05 April 2026 02:16:25 +0000 (0:00:11.501) 0:06:50.430 **********
2026-04-05 02:16:32.367888 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-05 02:16:32.367897 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-05 02:16:32.367905 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-05 02:16:32.367913 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-05 02:16:32.367921 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-05 02:16:32.367929 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-05 02:16:32.367936 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-05 02:16:32.367944 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-05 02:16:32.367952 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-05 02:16:32.367967 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-05 02:16:32.367975 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-05 02:16:32.368025 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-05 02:16:32.368034 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-05 02:16:32.368042 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-05 02:16:32.368050 | orchestrator |
2026-04-05 02:16:32.368058 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-05 02:16:32.368065 | orchestrator | Sunday 05 April 2026 02:16:26 +0000 (0:00:01.247) 0:06:51.678 **********
2026-04-05 02:16:32.368077 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:32.368085 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:32.368093 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:32.368101 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:32.368109 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:32.368116 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:32.368151 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:32.368160 | orchestrator |
2026-04-05 02:16:32.368168 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-05 02:16:32.368176 | orchestrator | Sunday 05 April 2026 02:16:27 +0000 (0:00:00.675) 0:06:52.353 **********
2026-04-05 02:16:32.368184 | orchestrator | ok: [testbed-manager]
2026-04-05 02:16:32.368192 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:16:32.368200 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:16:32.368208 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:16:32.368216 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:16:32.368224 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:16:32.368231 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:16:32.368239 | orchestrator |
2026-04-05 02:16:32.368247 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-05 02:16:32.368256 | orchestrator | Sunday 05 April 2026 02:16:31 +0000 (0:00:04.207) 0:06:56.561 **********
2026-04-05 02:16:32.368264 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:32.368272 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:32.368280 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:32.368288 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:32.368295 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:32.368303 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:32.368311 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:32.368319 | orchestrator |
2026-04-05 02:16:32.368327 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-05 02:16:32.368336 | orchestrator | Sunday 05 April 2026 02:16:31 +0000 (0:00:00.545) 0:06:57.106 **********
2026-04-05 02:16:32.368344 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-05 02:16:32.368351 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-05 02:16:32.368359 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:32.368367 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-05 02:16:32.368375 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-05 02:16:32.368383 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:32.368390 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-05 02:16:32.368398 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-05 02:16:32.368406 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:32.368422 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-05 02:16:53.316049 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-05 02:16:53.316191 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:53.316209 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-05 02:16:53.316221 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-05 02:16:53.316232 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:53.316271 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-05 02:16:53.316283 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-05 02:16:53.316295 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:53.316306 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-05 02:16:53.316317 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-05 02:16:53.316328 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:53.316339 | orchestrator |
2026-04-05 02:16:53.316352 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-05 02:16:53.316364 | orchestrator | Sunday 05 April 2026 02:16:32 +0000 (0:00:00.812) 0:06:57.919 **********
2026-04-05 02:16:53.316375 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:53.316386 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:53.316397 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:53.316407 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:53.316418 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:53.316429 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:53.316439 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:53.316450 | orchestrator |
2026-04-05 02:16:53.316461 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-05 02:16:53.316472 | orchestrator | Sunday 05 April 2026 02:16:33 +0000 (0:00:00.571) 0:06:58.490 **********
2026-04-05 02:16:53.316483 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:53.316494 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:53.316513 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:53.316532 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:53.316550 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:16:53.316572 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:16:53.316590 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:16:53.316604 | orchestrator |
2026-04-05 02:16:53.316617 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-05 02:16:53.316629 | orchestrator | Sunday 05 April 2026 02:16:33 +0000 (0:00:00.580) 0:06:59.071 **********
2026-04-05 02:16:53.316642 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:16:53.316654 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:16:53.316667 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:16:53.316679 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:16:53.316692 | orchestrator |
skipping: [testbed-node-0] 2026-04-05 02:16:53.316704 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:16:53.316717 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:16:53.316730 | orchestrator | 2026-04-05 02:16:53.316744 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-05 02:16:53.316756 | orchestrator | Sunday 05 April 2026 02:16:34 +0000 (0:00:00.634) 0:06:59.705 ********** 2026-04-05 02:16:53.316769 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.316781 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.316794 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.316806 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.316825 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:16:53.316843 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.316862 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.316882 | orchestrator | 2026-04-05 02:16:53.316902 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-05 02:16:53.316922 | orchestrator | Sunday 05 April 2026 02:16:36 +0000 (0:00:02.160) 0:07:01.866 ********** 2026-04-05 02:16:53.316940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:16:53.316954 | orchestrator | 2026-04-05 02:16:53.316966 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-05 02:16:53.316977 | orchestrator | Sunday 05 April 2026 02:16:37 +0000 (0:00:00.940) 0:07:02.807 ********** 2026-04-05 02:16:53.317005 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.317016 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:16:53.317027 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:16:53.317037 | orchestrator | 
changed: [testbed-node-5] 2026-04-05 02:16:53.317048 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:16:53.317059 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:16:53.317069 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:16:53.317080 | orchestrator | 2026-04-05 02:16:53.317091 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-05 02:16:53.317109 | orchestrator | Sunday 05 April 2026 02:16:38 +0000 (0:00:00.880) 0:07:03.688 ********** 2026-04-05 02:16:53.317126 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.317166 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:16:53.317186 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:16:53.317204 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:16:53.317224 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:16:53.317242 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:16:53.317253 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:16:53.317264 | orchestrator | 2026-04-05 02:16:53.317275 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-05 02:16:53.317286 | orchestrator | Sunday 05 April 2026 02:16:39 +0000 (0:00:00.921) 0:07:04.609 ********** 2026-04-05 02:16:53.317297 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.317307 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:16:53.317318 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:16:53.317333 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:16:53.317351 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:16:53.317371 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:16:53.317390 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:16:53.317401 | orchestrator | 2026-04-05 02:16:53.317412 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-05 02:16:53.317443 | 
orchestrator | Sunday 05 April 2026 02:16:40 +0000 (0:00:01.629) 0:07:06.239 ********** 2026-04-05 02:16:53.317455 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:16:53.317466 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.317476 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.317487 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.317498 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:16:53.317508 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.317519 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.317530 | orchestrator | 2026-04-05 02:16:53.317542 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-05 02:16:53.317561 | orchestrator | Sunday 05 April 2026 02:16:42 +0000 (0:00:01.449) 0:07:07.688 ********** 2026-04-05 02:16:53.317581 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.317595 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:16:53.317612 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:16:53.317629 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:16:53.317647 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:16:53.317665 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:16:53.317684 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:16:53.317704 | orchestrator | 2026-04-05 02:16:53.317718 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-05 02:16:53.317729 | orchestrator | Sunday 05 April 2026 02:16:43 +0000 (0:00:01.356) 0:07:09.045 ********** 2026-04-05 02:16:53.317739 | orchestrator | changed: [testbed-manager] 2026-04-05 02:16:53.317750 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:16:53.317761 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:16:53.317771 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:16:53.317782 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:16:53.317792 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 02:16:53.317803 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:16:53.317814 | orchestrator | 2026-04-05 02:16:53.317835 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-05 02:16:53.317846 | orchestrator | Sunday 05 April 2026 02:16:45 +0000 (0:00:01.457) 0:07:10.502 ********** 2026-04-05 02:16:53.317857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:16:53.317868 | orchestrator | 2026-04-05 02:16:53.317879 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-05 02:16:53.317890 | orchestrator | Sunday 05 April 2026 02:16:46 +0000 (0:00:01.160) 0:07:11.662 ********** 2026-04-05 02:16:53.317901 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.317912 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.317922 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.317933 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.317944 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:16:53.317954 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.317965 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.317976 | orchestrator | 2026-04-05 02:16:53.317987 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-05 02:16:53.317998 | orchestrator | Sunday 05 April 2026 02:16:47 +0000 (0:00:01.530) 0:07:13.193 ********** 2026-04-05 02:16:53.318008 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.318086 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.318107 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.318124 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.318182 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 02:16:53.318220 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.318240 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.318256 | orchestrator | 2026-04-05 02:16:53.318267 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-05 02:16:53.318279 | orchestrator | Sunday 05 April 2026 02:16:49 +0000 (0:00:01.171) 0:07:14.365 ********** 2026-04-05 02:16:53.318289 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.318300 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.318311 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.318321 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.318332 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:16:53.318343 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.318353 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.318364 | orchestrator | 2026-04-05 02:16:53.318375 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-05 02:16:53.318386 | orchestrator | Sunday 05 April 2026 02:16:50 +0000 (0:00:01.220) 0:07:15.585 ********** 2026-04-05 02:16:53.318396 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:16:53.318407 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:16:53.318418 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:16:53.318428 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:16:53.318439 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:16:53.318450 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:16:53.318460 | orchestrator | ok: [testbed-manager] 2026-04-05 02:16:53.318471 | orchestrator | 2026-04-05 02:16:53.318482 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-05 02:16:53.318493 | orchestrator | Sunday 05 April 2026 02:16:52 +0000 (0:00:01.709) 0:07:17.295 ********** 2026-04-05 02:16:53.318503 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:16:53.318515 | orchestrator | 2026-04-05 02:16:53.318526 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:16:53.318537 | orchestrator | Sunday 05 April 2026 02:16:52 +0000 (0:00:00.967) 0:07:18.263 ********** 2026-04-05 02:16:53.318548 | orchestrator | 2026-04-05 02:16:53.318558 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:16:53.318579 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.043) 0:07:18.306 ********** 2026-04-05 02:16:53.318590 | orchestrator | 2026-04-05 02:16:53.318600 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:16:53.318611 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.047) 0:07:18.354 ********** 2026-04-05 02:16:53.318622 | orchestrator | 2026-04-05 02:16:53.318633 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:16:53.318661 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.040) 0:07:18.395 ********** 2026-04-05 02:17:21.077520 | orchestrator | 2026-04-05 02:17:21.077632 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:17:21.077650 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.040) 0:07:18.435 ********** 2026-04-05 02:17:21.077662 | orchestrator | 2026-04-05 02:17:21.077673 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:17:21.077684 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.048) 0:07:18.484 ********** 2026-04-05 02:17:21.077695 | orchestrator | 2026-04-05 
02:17:21.077706 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-05 02:17:21.077717 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.039) 0:07:18.524 ********** 2026-04-05 02:17:21.077728 | orchestrator | 2026-04-05 02:17:21.077739 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-05 02:17:21.077750 | orchestrator | Sunday 05 April 2026 02:16:53 +0000 (0:00:00.042) 0:07:18.566 ********** 2026-04-05 02:17:21.077761 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:21.077773 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:21.077784 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:17:21.077795 | orchestrator | 2026-04-05 02:17:21.077806 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-05 02:17:21.077817 | orchestrator | Sunday 05 April 2026 02:16:54 +0000 (0:00:01.205) 0:07:19.772 ********** 2026-04-05 02:17:21.077828 | orchestrator | changed: [testbed-manager] 2026-04-05 02:17:21.077840 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:21.077850 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:21.077861 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:21.077872 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:21.077883 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:21.077894 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:21.077904 | orchestrator | 2026-04-05 02:17:21.077915 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-05 02:17:21.077927 | orchestrator | Sunday 05 April 2026 02:16:56 +0000 (0:00:02.391) 0:07:22.163 ********** 2026-04-05 02:17:21.077937 | orchestrator | changed: [testbed-manager] 2026-04-05 02:17:21.077948 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:21.077959 | orchestrator | changed: [testbed-node-4] 2026-04-05 
02:17:21.077970 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:21.077981 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:21.077991 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:21.078002 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:21.078013 | orchestrator | 2026-04-05 02:17:21.078084 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-05 02:17:21.078099 | orchestrator | Sunday 05 April 2026 02:16:58 +0000 (0:00:01.269) 0:07:23.432 ********** 2026-04-05 02:17:21.078112 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:21.078124 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:21.078136 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:21.078149 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:21.078161 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:21.078197 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:21.078211 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:21.078224 | orchestrator | 2026-04-05 02:17:21.078237 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-05 02:17:21.078250 | orchestrator | Sunday 05 April 2026 02:17:00 +0000 (0:00:02.422) 0:07:25.855 ********** 2026-04-05 02:17:21.078304 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:21.078318 | orchestrator | 2026-04-05 02:17:21.078330 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-05 02:17:21.078344 | orchestrator | Sunday 05 April 2026 02:17:00 +0000 (0:00:00.140) 0:07:25.996 ********** 2026-04-05 02:17:21.078357 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.078369 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:21.078381 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:21.078395 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:21.078408 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:21.078421 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:21.078432 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:21.078442 | orchestrator | 2026-04-05 02:17:21.078454 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-05 02:17:21.078466 | orchestrator | Sunday 05 April 2026 02:17:01 +0000 (0:00:01.065) 0:07:27.062 ********** 2026-04-05 02:17:21.078476 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:21.078487 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:21.078498 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:17:21.078508 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:17:21.078519 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:17:21.078530 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:17:21.078541 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:17:21.078551 | orchestrator | 2026-04-05 02:17:21.078562 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-05 02:17:21.078573 | orchestrator | Sunday 05 April 2026 02:17:02 +0000 (0:00:00.593) 0:07:27.655 ********** 2026-04-05 02:17:21.078585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:17:21.078598 | orchestrator | 2026-04-05 02:17:21.078609 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-05 02:17:21.078620 | orchestrator | Sunday 05 April 2026 02:17:03 +0000 (0:00:01.120) 0:07:28.776 ********** 2026-04-05 02:17:21.078630 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.078641 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:17:21.078652 | orchestrator | ok: 
[testbed-node-5] 2026-04-05 02:17:21.078663 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:17:21.078674 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:21.078684 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:21.078695 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:17:21.078706 | orchestrator | 2026-04-05 02:17:21.078717 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-05 02:17:21.078728 | orchestrator | Sunday 05 April 2026 02:17:04 +0000 (0:00:00.874) 0:07:29.650 ********** 2026-04-05 02:17:21.078739 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-05 02:17:21.078770 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-05 02:17:21.078782 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-05 02:17:21.078793 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-05 02:17:21.078804 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-05 02:17:21.078815 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-05 02:17:21.078826 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-05 02:17:21.078837 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-05 02:17:21.078848 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-05 02:17:21.078859 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-05 02:17:21.078870 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-05 02:17:21.078880 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-05 02:17:21.078899 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-05 02:17:21.078910 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-05 02:17:21.078921 | orchestrator | 2026-04-05 02:17:21.078933 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-05 02:17:21.078944 | orchestrator | Sunday 05 April 2026 02:17:06 +0000 (0:00:02.463) 0:07:32.114 ********** 2026-04-05 02:17:21.078955 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:21.078966 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:21.078977 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:17:21.078988 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:17:21.079008 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:17:21.079026 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:17:21.079055 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:17:21.079075 | orchestrator | 2026-04-05 02:17:21.079094 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-05 02:17:21.079114 | orchestrator | Sunday 05 April 2026 02:17:07 +0000 (0:00:00.765) 0:07:32.879 ********** 2026-04-05 02:17:21.079135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:17:21.079157 | orchestrator | 2026-04-05 02:17:21.079239 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-05 02:17:21.079263 | orchestrator | Sunday 05 April 2026 02:17:08 +0000 (0:00:00.938) 0:07:33.818 ********** 2026-04-05 02:17:21.079281 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.079301 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:17:21.079320 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:17:21.079339 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:17:21.079360 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:21.079378 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:21.079398 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 02:17:21.079410 | orchestrator | 2026-04-05 02:17:21.079421 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-05 02:17:21.079432 | orchestrator | Sunday 05 April 2026 02:17:09 +0000 (0:00:00.960) 0:07:34.779 ********** 2026-04-05 02:17:21.079451 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.079462 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:17:21.079473 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:17:21.079483 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:17:21.079494 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:21.079505 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:21.079515 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:17:21.079526 | orchestrator | 2026-04-05 02:17:21.079537 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-05 02:17:21.079548 | orchestrator | Sunday 05 April 2026 02:17:10 +0000 (0:00:01.094) 0:07:35.873 ********** 2026-04-05 02:17:21.079559 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:21.079570 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:21.079580 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:17:21.079591 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:17:21.079602 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:17:21.079612 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:17:21.079623 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:17:21.079633 | orchestrator | 2026-04-05 02:17:21.079644 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-05 02:17:21.079655 | orchestrator | Sunday 05 April 2026 02:17:11 +0000 (0:00:00.559) 0:07:36.432 ********** 2026-04-05 02:17:21.079666 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:17:21.079677 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:17:21.079687 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 02:17:21.079698 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.079709 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:21.079731 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:21.079742 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:17:21.079753 | orchestrator | 2026-04-05 02:17:21.079764 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-05 02:17:21.079777 | orchestrator | Sunday 05 April 2026 02:17:12 +0000 (0:00:01.549) 0:07:37.982 ********** 2026-04-05 02:17:21.079794 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:21.079819 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:21.079842 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:17:21.079858 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:17:21.079874 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:17:21.079890 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:17:21.079907 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:17:21.079923 | orchestrator | 2026-04-05 02:17:21.079940 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-05 02:17:21.079958 | orchestrator | Sunday 05 April 2026 02:17:13 +0000 (0:00:00.551) 0:07:38.534 ********** 2026-04-05 02:17:21.079975 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:21.079995 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:21.080014 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:21.080032 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:21.080048 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:21.080059 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:21.080082 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:55.246686 | orchestrator | 2026-04-05 02:17:55.246828 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-04-05 02:17:55.246855 | orchestrator | Sunday 05 April 2026 02:17:21 +0000 (0:00:07.790) 0:07:46.325 ********** 2026-04-05 02:17:55.246876 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:55.246895 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:55.246915 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:55.246932 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:55.246950 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:55.246967 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:55.246984 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:55.247001 | orchestrator | 2026-04-05 02:17:55.247019 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-05 02:17:55.247036 | orchestrator | Sunday 05 April 2026 02:17:22 +0000 (0:00:01.820) 0:07:48.145 ********** 2026-04-05 02:17:55.247054 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:55.247072 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:55.247090 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:55.247107 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:55.247127 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:55.247145 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:55.247163 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:55.247180 | orchestrator | 2026-04-05 02:17:55.247199 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-05 02:17:55.247255 | orchestrator | Sunday 05 April 2026 02:17:24 +0000 (0:00:01.970) 0:07:50.115 ********** 2026-04-05 02:17:55.247275 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:55.247295 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:17:55.247315 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:17:55.247335 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:17:55.247354 | 
orchestrator | changed: [testbed-node-0] 2026-04-05 02:17:55.247376 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:17:55.247396 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:17:55.247416 | orchestrator | 2026-04-05 02:17:55.247432 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 02:17:55.247444 | orchestrator | Sunday 05 April 2026 02:17:26 +0000 (0:00:01.920) 0:07:52.035 ********** 2026-04-05 02:17:55.247457 | orchestrator | ok: [testbed-manager] 2026-04-05 02:17:55.247470 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:17:55.247483 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:17:55.247524 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:17:55.247538 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:17:55.247551 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:17:55.247562 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:17:55.247573 | orchestrator | 2026-04-05 02:17:55.247584 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 02:17:55.247595 | orchestrator | Sunday 05 April 2026 02:17:27 +0000 (0:00:00.881) 0:07:52.917 ********** 2026-04-05 02:17:55.247606 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:17:55.247618 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:17:55.247628 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:17:55.247639 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:17:55.247650 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:17:55.247661 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:17:55.247672 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:17:55.247682 | orchestrator | 2026-04-05 02:17:55.247693 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-05 02:17:55.247705 | orchestrator | Sunday 05 April 2026 02:17:28 +0000 (0:00:01.234) 0:07:54.151 ********** 
2026-04-05 02:17:55.247716 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:17:55.247727 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:17:55.247737 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:17:55.247748 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:17:55.247759 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:17:55.247769 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:17:55.247780 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:17:55.247791 | orchestrator |
2026-04-05 02:17:55.247801 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-05 02:17:55.247812 | orchestrator | Sunday 05 April 2026 02:17:29 +0000 (0:00:00.582) 0:07:54.734 **********
2026-04-05 02:17:55.247823 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.247852 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.247863 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.247874 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.247884 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.247893 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.247903 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.247912 | orchestrator |
2026-04-05 02:17:55.247922 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-05 02:17:55.247931 | orchestrator | Sunday 05 April 2026 02:17:30 +0000 (0:00:00.578) 0:07:55.312 **********
2026-04-05 02:17:55.247941 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.247950 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.247960 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.247970 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.247980 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.247989 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.247999 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248008 | orchestrator |
2026-04-05 02:17:55.248018 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-05 02:17:55.248028 | orchestrator | Sunday 05 April 2026 02:17:30 +0000 (0:00:00.640) 0:07:55.953 **********
2026-04-05 02:17:55.248037 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.248047 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.248056 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.248066 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.248075 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.248084 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.248094 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248103 | orchestrator |
2026-04-05 02:17:55.248113 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-05 02:17:55.248122 | orchestrator | Sunday 05 April 2026 02:17:31 +0000 (0:00:00.887) 0:07:56.840 **********
2026-04-05 02:17:55.248132 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.248141 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.248158 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.248167 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.248177 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.248186 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.248196 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248205 | orchestrator |
2026-04-05 02:17:55.248281 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-05 02:17:55.248292 | orchestrator | Sunday 05 April 2026 02:17:37 +0000 (0:00:05.433) 0:08:02.274 **********
2026-04-05 02:17:55.248302 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:17:55.248312 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:17:55.248322 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:17:55.248331 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:17:55.248341 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:17:55.248351 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:17:55.248360 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:17:55.248370 | orchestrator |
2026-04-05 02:17:55.248380 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-05 02:17:55.248390 | orchestrator | Sunday 05 April 2026 02:17:37 +0000 (0:00:00.535) 0:08:02.810 **********
2026-04-05 02:17:55.248402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:17:55.248414 | orchestrator |
2026-04-05 02:17:55.248424 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-05 02:17:55.248434 | orchestrator | Sunday 05 April 2026 02:17:38 +0000 (0:00:01.028) 0:08:03.839 **********
2026-04-05 02:17:55.248444 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.248453 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.248463 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.248473 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.248482 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.248492 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.248501 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248511 | orchestrator |
2026-04-05 02:17:55.248521 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-05 02:17:55.248531 | orchestrator | Sunday 05 April 2026 02:17:40 +0000 (0:00:01.896) 0:08:05.735 **********
2026-04-05 02:17:55.248540 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.248550 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.248559 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.248569 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.248578 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.248588 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.248597 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248606 | orchestrator |
2026-04-05 02:17:55.248616 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-05 02:17:55.248626 | orchestrator | Sunday 05 April 2026 02:17:41 +0000 (0:00:01.241) 0:08:06.976 **********
2026-04-05 02:17:55.248635 | orchestrator | ok: [testbed-manager]
2026-04-05 02:17:55.248645 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:17:55.248655 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:17:55.248664 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:17:55.248674 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:17:55.248684 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:17:55.248693 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:17:55.248702 | orchestrator |
2026-04-05 02:17:55.248712 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-05 02:17:55.248722 | orchestrator | Sunday 05 April 2026 02:17:42 +0000 (0:00:00.881) 0:08:07.858 **********
2026-04-05 02:17:55.248737 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248749 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248766 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248776 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248785 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248795 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248805 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-05 02:17:55.248814 | orchestrator |
2026-04-05 02:17:55.248824 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-05 02:17:55.248834 | orchestrator | Sunday 05 April 2026 02:17:44 +0000 (0:00:01.939) 0:08:09.797 **********
2026-04-05 02:17:55.248844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:17:55.248854 | orchestrator |
2026-04-05 02:17:55.248863 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-05 02:17:55.248873 | orchestrator | Sunday 05 April 2026 02:17:45 +0000 (0:00:00.950) 0:08:10.748 **********
2026-04-05 02:17:55.248883 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:17:55.248892 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:17:55.248902 | orchestrator | changed: [testbed-manager]
2026-04-05 02:17:55.248912 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:17:55.248921 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:17:55.248931 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:17:55.248940 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:17:55.248950 | orchestrator |
2026-04-05 02:17:55.248965 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-05 02:18:28.079648 | orchestrator | Sunday 05 April 2026 02:17:55 +0000 (0:00:09.752) 0:08:20.501 **********
2026-04-05 02:18:28.079737 | orchestrator | ok: [testbed-manager]
2026-04-05 02:18:28.079750 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:18:28.079757 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:18:28.079765 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:18:28.079772 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:18:28.079779 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:18:28.079786 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:18:28.079795 | orchestrator |
2026-04-05 02:18:28.079801 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-05 02:18:28.079807 | orchestrator | Sunday 05 April 2026 02:17:57 +0000 (0:00:02.099) 0:08:22.600 **********
2026-04-05 02:18:28.079812 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:18:28.079817 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:18:28.079825 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:18:28.079833 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:18:28.079841 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:18:28.079847 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:18:28.079852 | orchestrator |
2026-04-05 02:18:28.079857 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-05 02:18:28.079862 | orchestrator | Sunday 05 April 2026 02:17:58 +0000 (0:00:01.370) 0:08:23.971 **********
2026-04-05 02:18:28.079867 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.079873 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.079877 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.079882 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.079887 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.079911 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.079916 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.079921 | orchestrator |
2026-04-05 02:18:28.079927 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-05 02:18:28.079932 | orchestrator |
2026-04-05 02:18:28.079937 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-05 02:18:28.079942 | orchestrator | Sunday 05 April 2026 02:18:00 +0000 (0:00:01.313) 0:08:25.285 **********
2026-04-05 02:18:28.079947 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:18:28.079952 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:18:28.079957 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:18:28.079962 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:18:28.079970 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:18:28.079978 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:18:28.079986 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:18:28.079996 | orchestrator |
2026-04-05 02:18:28.080002 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-05 02:18:28.080007 | orchestrator |
2026-04-05 02:18:28.080012 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-05 02:18:28.080020 | orchestrator | Sunday 05 April 2026 02:18:00 +0000 (0:00:00.856) 0:08:26.142 **********
2026-04-05 02:18:28.080027 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080035 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080042 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080051 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080059 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080067 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080074 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080079 | orchestrator |
2026-04-05 02:18:28.080085 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-05 02:18:28.080109 | orchestrator | Sunday 05 April 2026 02:18:02 +0000 (0:00:01.361) 0:08:27.503 **********
2026-04-05 02:18:28.080118 | orchestrator | ok: [testbed-manager]
2026-04-05 02:18:28.080125 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:18:28.080133 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:18:28.080139 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:18:28.080144 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:18:28.080149 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:18:28.080154 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:18:28.080159 | orchestrator |
2026-04-05 02:18:28.080164 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-05 02:18:28.080169 | orchestrator | Sunday 05 April 2026 02:18:03 +0000 (0:00:01.492) 0:08:28.996 **********
2026-04-05 02:18:28.080174 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:18:28.080179 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:18:28.080184 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:18:28.080189 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:18:28.080194 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:18:28.080199 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:18:28.080204 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:18:28.080210 | orchestrator |
2026-04-05 02:18:28.080216 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-05 02:18:28.080222 | orchestrator | Sunday 05 April 2026 02:18:04 +0000 (0:00:00.523) 0:08:29.519 **********
2026-04-05 02:18:28.080229 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:18:28.080236 | orchestrator |
2026-04-05 02:18:28.080264 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-05 02:18:28.080270 | orchestrator | Sunday 05 April 2026 02:18:05 +0000 (0:00:01.063) 0:08:30.582 **********
2026-04-05 02:18:28.080279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:18:28.080296 | orchestrator |
2026-04-05 02:18:28.080305 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-05 02:18:28.080315 | orchestrator | Sunday 05 April 2026 02:18:06 +0000 (0:00:00.846) 0:08:31.429 **********
2026-04-05 02:18:28.080323 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080332 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080338 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080343 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080348 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080353 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080358 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080363 | orchestrator |
2026-04-05 02:18:28.080382 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-05 02:18:28.080392 | orchestrator | Sunday 05 April 2026 02:18:15 +0000 (0:00:09.009) 0:08:40.438 **********
2026-04-05 02:18:28.080400 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080408 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080416 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080424 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080432 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080439 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080460 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080469 | orchestrator |
2026-04-05 02:18:28.080477 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-05 02:18:28.080486 | orchestrator | Sunday 05 April 2026 02:18:16 +0000 (0:00:01.130) 0:08:41.569 **********
2026-04-05 02:18:28.080494 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080503 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080512 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080521 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080530 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080538 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080543 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080548 | orchestrator |
2026-04-05 02:18:28.080553 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-05 02:18:28.080559 | orchestrator | Sunday 05 April 2026 02:18:17 +0000 (0:00:02.017) 0:08:43.059 **********
2026-04-05 02:18:28.080564 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080569 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080574 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080579 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080584 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080589 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080594 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080598 | orchestrator |
2026-04-05 02:18:28.080604 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
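The chrony tasks above rendered `chrony.conf.j2` onto every host and the role's handler then restarted chronyd. The rendered result is not captured in this log; a minimal Debian-style `chrony.conf` of the kind such a template typically produces might look like the following (server names and paths are illustrative assumptions, not the testbed's actual NTP configuration):

```ini
# Illustrative /etc/chrony/chrony.conf - the file actually rendered by
# osism.services.chrony from chrony.conf.j2 is not shown in this log.
server ntp1.example.com iburst
server ntp2.example.com iburst

# State and key files at their Debian default locations.
driftfile /var/lib/chrony/chrony.drift
keyfile /etc/chrony/chrony.keys

# Step the clock on large offsets during the first three updates,
# and keep the RTC in sync with the system clock.
makestep 1.0 3
rtcsync
```

After a config change like this, `chronyc sources` on a node would show whether the configured servers are reachable and selected.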
2026-04-05 02:18:28.080609 | orchestrator | Sunday 05 April 2026 02:18:19 +0000 (0:00:02.017) 0:08:45.076 **********
2026-04-05 02:18:28.080614 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080619 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080624 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080628 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080633 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080638 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080643 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080648 | orchestrator |
2026-04-05 02:18:28.080653 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-05 02:18:28.080659 | orchestrator | Sunday 05 April 2026 02:18:21 +0000 (0:00:01.445) 0:08:46.521 **********
2026-04-05 02:18:28.080663 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080668 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080679 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080684 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080689 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080694 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080699 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080704 | orchestrator |
2026-04-05 02:18:28.080709 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-05 02:18:28.080714 | orchestrator |
2026-04-05 02:18:28.080724 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-05 02:18:28.080729 | orchestrator | Sunday 05 April 2026 02:18:22 +0000 (0:00:01.208) 0:08:47.730 **********
2026-04-05 02:18:28.080738 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:18:28.080746 | orchestrator |
2026-04-05 02:18:28.080754 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-05 02:18:28.080762 | orchestrator | Sunday 05 April 2026 02:18:23 +0000 (0:00:00.887) 0:08:48.617 **********
2026-04-05 02:18:28.080771 | orchestrator | ok: [testbed-manager]
2026-04-05 02:18:28.080779 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:18:28.080788 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:18:28.080795 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:18:28.080800 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:18:28.080805 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:18:28.080810 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:18:28.080815 | orchestrator |
2026-04-05 02:18:28.080820 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-05 02:18:28.080825 | orchestrator | Sunday 05 April 2026 02:18:24 +0000 (0:00:01.196) 0:08:49.814 **********
2026-04-05 02:18:28.080830 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:28.080835 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:28.080840 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:28.080845 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:28.080850 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:28.080855 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:28.080864 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:28.080872 | orchestrator |
2026-04-05 02:18:28.080880 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-05 02:18:28.080887 | orchestrator | Sunday 05 April 2026 02:18:25 +0000 (0:00:01.363) 0:08:51.177 **********
2026-04-05 02:18:28.080895 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:18:28.080903 | orchestrator |
2026-04-05 02:18:28.080911 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-05 02:18:28.080920 | orchestrator | Sunday 05 April 2026 02:18:27 +0000 (0:00:01.250) 0:08:52.428 **********
2026-04-05 02:18:28.080928 | orchestrator | ok: [testbed-manager]
2026-04-05 02:18:28.080937 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:18:28.080943 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:18:28.080948 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:18:28.080953 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:18:28.080958 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:18:28.080963 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:18:28.080968 | orchestrator |
2026-04-05 02:18:28.080980 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-05 02:18:29.746503 | orchestrator | Sunday 05 April 2026 02:18:28 +0000 (0:00:00.895) 0:08:53.324 **********
2026-04-05 02:18:29.746596 | orchestrator | changed: [testbed-manager]
2026-04-05 02:18:29.746610 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:18:29.746621 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:18:29.746630 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:18:29.746640 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:18:29.746650 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:18:29.746659 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:18:29.746698 | orchestrator |
2026-04-05 02:18:29.746710 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:18:29.746721 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-05 02:18:29.746733 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
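The two `osism.commons.state` runs above record the `osism.bootstrap.status` and `osism.bootstrap.timestamp` facts by creating a custom facts directory and writing state into a file. In Ansible, INI-style files under `/etc/ansible/facts.d` are picked up as `ansible_local` facts on the next fact gathering; the real layout used by osism.commons.state is not visible in this log, so the file below is purely a hypothetical illustration of the mechanism:

```ini
# Hypothetical /etc/ansible/facts.d/osism.fact - illustrates the custom
# facts mechanism only; the actual file written by osism.commons.state
# is not shown in this log, and all keys/values here are assumptions.
[bootstrap]
status = bootstrapped
timestamp = 2026-04-05T02:18:25+00:00
```

A file of this shape would surface after setup as `ansible_local.osism.bootstrap.status`, which is how later plays can check that a node has already been bootstrapped.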
2026-04-05 02:18:29.746743 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-05 02:18:29.746752 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-05 02:18:29.746762 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-05 02:18:29.746772 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 02:18:29.746781 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-05 02:18:29.746791 | orchestrator |
2026-04-05 02:18:29.746801 | orchestrator |
2026-04-05 02:18:29.746810 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:18:29.746820 | orchestrator | Sunday 05 April 2026 02:18:29 +0000 (0:00:01.128) 0:08:54.452 **********
2026-04-05 02:18:29.746830 | orchestrator | ===============================================================================
2026-04-05 02:18:29.746839 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.58s
2026-04-05 02:18:29.746849 | orchestrator | osism.commons.packages : Download required packages -------------------- 61.24s
2026-04-05 02:18:29.746858 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.76s
2026-04-05 02:18:29.746868 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.29s
2026-04-05 02:18:29.746877 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 14.41s
2026-04-05 02:18:29.746900 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.87s
2026-04-05 02:18:29.746911 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.50s
2026-04-05 02:18:29.746921 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.86s
2026-04-05 02:18:29.746931 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.75s
2026-04-05 02:18:29.746941 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.01s
2026-04-05 02:18:29.746950 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.01s
2026-04-05 02:18:29.746960 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.68s
2026-04-05 02:18:29.746969 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.30s
2026-04-05 02:18:29.746979 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.21s
2026-04-05 02:18:29.746988 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.79s
2026-04-05 02:18:29.746998 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.52s
2026-04-05 02:18:29.747007 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.88s
2026-04-05 02:18:29.747017 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.71s
2026-04-05 02:18:29.747026 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.28s
2026-04-05 02:18:29.747036 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.84s
2026-04-05 02:18:30.130451 | orchestrator | + osism apply fail2ban
2026-04-05 02:18:43.254853 | orchestrator | 2026-04-05 02:18:43 | INFO  | Task c1d8e4fd-5bf0-4507-8551-3e9a22ceb95d (fail2ban) was prepared for execution.
2026-04-05 02:18:43.254986 | orchestrator | 2026-04-05 02:18:43 | INFO  | It takes a moment until task c1d8e4fd-5bf0-4507-8551-3e9a22ceb95d (fail2ban) has been started and output is visible here. 2026-04-05 02:19:05.173100 | orchestrator | 2026-04-05 02:19:05.173237 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-05 02:19:05.173255 | orchestrator | 2026-04-05 02:19:05.173309 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-05 02:19:05.173324 | orchestrator | Sunday 05 April 2026 02:18:47 +0000 (0:00:00.277) 0:00:00.277 ********** 2026-04-05 02:19:05.173336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 02:19:05.173348 | orchestrator | 2026-04-05 02:19:05.173358 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-05 02:19:05.173368 | orchestrator | Sunday 05 April 2026 02:18:48 +0000 (0:00:01.190) 0:00:01.468 ********** 2026-04-05 02:19:05.173378 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:19:05.173389 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:19:05.173404 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:19:05.173421 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:19:05.173437 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:19:05.173453 | orchestrator | changed: [testbed-manager] 2026-04-05 02:19:05.173471 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:19:05.173489 | orchestrator | 2026-04-05 02:19:05.173501 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-05 02:19:05.173511 | orchestrator | Sunday 05 April 2026 02:19:00 +0000 (0:00:11.064) 0:00:12.533 ********** 
2026-04-05 02:19:05.173520 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:19:05.173530 | orchestrator | changed: [testbed-manager]
2026-04-05 02:19:05.173540 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:19:05.173549 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:19:05.173559 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:19:05.173568 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:19:05.173578 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:19:05.173587 | orchestrator |
2026-04-05 02:19:05.173597 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-05 02:19:05.173607 | orchestrator | Sunday 05 April 2026 02:19:01 +0000 (0:00:01.497) 0:00:14.031 **********
2026-04-05 02:19:05.173617 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:05.173627 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:05.173637 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:05.173647 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:05.173658 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:05.173670 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:05.173682 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:05.173693 | orchestrator |
2026-04-05 02:19:05.173705 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-05 02:19:05.173718 | orchestrator | Sunday 05 April 2026 02:19:03 +0000 (0:00:01.534) 0:00:15.565 **********
2026-04-05 02:19:05.173731 | orchestrator | changed: [testbed-manager]
2026-04-05 02:19:05.173743 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:19:05.173756 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:19:05.173769 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:19:05.173782 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:19:05.173794 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:19:05.173806 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:19:05.173819 | orchestrator |
2026-04-05 02:19:05.173833 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:19:05.173846 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173891 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173905 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173918 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173931 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173944 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173958 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:19:05.173970 | orchestrator |
2026-04-05 02:19:05.173983 | orchestrator |
2026-04-05 02:19:05.173996 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:19:05.174009 | orchestrator | Sunday 05 April 2026 02:19:04 +0000 (0:00:01.641) 0:00:17.207 **********
2026-04-05 02:19:05.174087 | orchestrator | ===============================================================================
2026-04-05 02:19:05.174099 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.06s
2026-04-05 02:19:05.174110 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s
2026-04-05 02:19:05.174120 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.53s
2026-04-05 02:19:05.174131 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-04-05 02:19:05.174142 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.19s
2026-04-05 02:19:05.509390 | orchestrator | + osism apply network
2026-04-05 02:19:17.610401 | orchestrator | 2026-04-05 02:19:17 | INFO  | Task 8b20f1e1-79c6-4f4e-b58f-f4480ecebfc1 (network) was prepared for execution.
2026-04-05 02:19:17.610517 | orchestrator | 2026-04-05 02:19:17 | INFO  | It takes a moment until task 8b20f1e1-79c6-4f4e-b58f-f4480ecebfc1 (network) has been started and output is visible here.
2026-04-05 02:19:48.292597 | orchestrator |
2026-04-05 02:19:48.292678 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-05 02:19:48.292687 | orchestrator |
2026-04-05 02:19:48.292692 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-05 02:19:48.292698 | orchestrator | Sunday 05 April 2026 02:19:22 +0000 (0:00:00.273) 0:00:00.273 **********
2026-04-05 02:19:48.292703 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.292709 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.292714 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.292719 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.292724 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.292728 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.292733 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.292738 | orchestrator |
2026-04-05 02:19:48.292743 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-05 02:19:48.292748 | orchestrator | Sunday 05 April 2026 02:19:22 +0000 (0:00:00.758) 0:00:01.031 **********
2026-04-05 02:19:48.292755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:19:48.292762 | orchestrator |
2026-04-05 02:19:48.292767 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-05 02:19:48.292772 | orchestrator | Sunday 05 April 2026 02:19:24 +0000 (0:00:01.284) 0:00:02.316 **********
2026-04-05 02:19:48.292792 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.292797 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.292802 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.292807 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.292811 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.292816 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.292821 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.292825 | orchestrator |
2026-04-05 02:19:48.292830 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-05 02:19:48.292835 | orchestrator | Sunday 05 April 2026 02:19:26 +0000 (0:00:02.205) 0:00:04.522 **********
2026-04-05 02:19:48.292840 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.292844 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.292849 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.292854 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.292859 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.292863 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.292868 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.292873 | orchestrator |
2026-04-05 02:19:48.292878 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-05 02:19:48.292882 | orchestrator | Sunday 05 April 2026 02:19:28 +0000 (0:00:01.862) 0:00:06.384 **********
2026-04-05 02:19:48.292887 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-05 02:19:48.292893 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-05 02:19:48.292897 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-05 02:19:48.292902 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-05 02:19:48.292907 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-05 02:19:48.292911 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-05 02:19:48.292916 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-05 02:19:48.292921 | orchestrator |
2026-04-05 02:19:48.292937 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-05 02:19:48.292942 | orchestrator | Sunday 05 April 2026 02:19:29 +0000 (0:00:01.160) 0:00:07.545 **********
2026-04-05 02:19:48.292950 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:19:48.292956 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 02:19:48.292960 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 02:19:48.292965 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 02:19:48.292970 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 02:19:48.292975 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 02:19:48.292979 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 02:19:48.292984 | orchestrator |
2026-04-05 02:19:48.292989 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-05 02:19:48.292993 | orchestrator | Sunday 05 April 2026 02:19:33 +0000 (0:00:03.733) 0:00:11.279 **********
2026-04-05 02:19:48.292998 | orchestrator | changed: [testbed-manager]
2026-04-05 02:19:48.293003 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:19:48.293007 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:19:48.293012 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:19:48.293017 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:19:48.293021 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:19:48.293026 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:19:48.293031 | orchestrator |
2026-04-05 02:19:48.293036 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-05 02:19:48.293040 | orchestrator | Sunday 05 April 2026 02:19:34 +0000 (0:00:01.735) 0:00:13.014 **********
2026-04-05 02:19:48.293045 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:19:48.293050 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 02:19:48.293054 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 02:19:48.293059 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 02:19:48.293064 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 02:19:48.293073 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 02:19:48.293077 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 02:19:48.293082 | orchestrator |
2026-04-05 02:19:48.293087 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-05 02:19:48.293092 | orchestrator | Sunday 05 April 2026 02:19:36 +0000 (0:00:01.813) 0:00:14.828 **********
2026-04-05 02:19:48.293096 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.293101 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.293106 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.293110 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.293115 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.293120 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.293124 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.293129 | orchestrator |
2026-04-05 02:19:48.293134 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-05 02:19:48.293151 | orchestrator | Sunday 05 April 2026 02:19:37 +0000 (0:00:01.234) 0:00:16.062 **********
2026-04-05 02:19:48.293156 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:19:48.293161 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:19:48.293166 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:19:48.293171 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:19:48.293177 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:19:48.293182 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:19:48.293188 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:19:48.293193 | orchestrator |
2026-04-05 02:19:48.293199 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-05 02:19:48.293205 | orchestrator | Sunday 05 April 2026 02:19:38 +0000 (0:00:00.759) 0:00:16.822 **********
2026-04-05 02:19:48.293210 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.293216 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.293221 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.293226 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.293232 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.293237 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.293243 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.293248 | orchestrator |
2026-04-05 02:19:48.293253 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-05 02:19:48.293259 | orchestrator | Sunday 05 April 2026 02:19:40 +0000 (0:00:02.328) 0:00:19.150 **********
2026-04-05 02:19:48.293265 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:19:48.293270 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:19:48.293276 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:19:48.293281 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:19:48.293287 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:19:48.293292 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:19:48.293350 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-04-05 02:19:48.293357 | orchestrator |
2026-04-05 02:19:48.293363 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-05 02:19:48.293368 | orchestrator | Sunday 05 April 2026 02:19:41 +0000 (0:00:00.962) 0:00:20.112 **********
2026-04-05 02:19:48.293374 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.293380 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:19:48.293385 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:19:48.293391 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:19:48.293396 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:19:48.293402 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:19:48.293407 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:19:48.293413 | orchestrator |
2026-04-05 02:19:48.293419 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-05 02:19:48.293424 | orchestrator | Sunday 05 April 2026 02:19:43 +0000 (0:00:01.721) 0:00:21.834 **********
2026-04-05 02:19:48.293430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:19:48.293442 | orchestrator |
2026-04-05 02:19:48.293447 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-05 02:19:48.293451 | orchestrator | Sunday 05 April 2026 02:19:44 +0000 (0:00:01.238) 0:00:23.174 **********
2026-04-05 02:19:48.293456 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.293461 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.293466 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.293470 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.293475 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.293483 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.293488 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.293493 | orchestrator |
2026-04-05 02:19:48.293498 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-05 02:19:48.293502 | orchestrator | Sunday 05 April 2026 02:19:46 +0000 (0:00:01.238) 0:00:24.413 **********
2026-04-05 02:19:48.293507 | orchestrator | ok: [testbed-manager]
2026-04-05 02:19:48.293512 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:19:48.293517 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:19:48.293521 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:19:48.293526 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:19:48.293531 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:19:48.293535 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:19:48.293540 | orchestrator |
2026-04-05 02:19:48.293545 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-05 02:19:48.293550 | orchestrator | Sunday 05 April 2026 02:19:46 +0000 (0:00:00.745) 0:00:25.158 **********
2026-04-05 02:19:48.293554 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293559 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293564 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293569 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293573 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293578 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293583 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293588 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293592 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293597 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293602 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293606 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293611 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-05 02:19:48.293616 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-05 02:19:48.293621 | orchestrator |
2026-04-05 02:19:48.293629 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-05 02:20:04.687394 | orchestrator | Sunday 05 April 2026 02:19:48 +0000 (0:00:01.389) 0:00:26.548 **********
2026-04-05 02:20:04.687522 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:20:04.687538 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:20:04.687548 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:20:04.687558 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:20:04.688502 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:20:04.688557 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:20:04.688576 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:20:04.688596 | orchestrator |
2026-04-05 02:20:04.688618 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-05 02:20:04.688674 | orchestrator | Sunday 05 April 2026 02:19:48 +0000 (0:00:00.699) 0:00:27.248 **********
2026-04-05 02:20:04.688699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-2, testbed-node-3, testbed-node-4
2026-04-05 02:20:04.688721 | orchestrator |
2026-04-05 02:20:04.688735 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-05 02:20:04.688746 | orchestrator | Sunday 05 April 2026 02:19:53 +0000 (0:00:04.732) 0:00:31.980 **********
2026-04-05 02:20:04.688758 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.688870 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
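Each logged item maps onto a pair of systemd-networkd unit files; the cleanup task later in this play lists them as /etc/systemd/network/30-vxlan0.netdev, 30-vxlan0.network, and so on. As a sketch (the actual template rendered by the role is not visible in this log; the option names below are standard systemd.netdev keys, chosen here as an assumption), the netdev file for testbed-manager's vxlan0 item could look like:

```ini
; Hypothetical /etc/systemd/network/30-vxlan0.netdev
; (values taken from the logged item for testbed-manager; layout assumed)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

The per-peer `dests` list from the log cannot live in the `.netdev` itself; for unicast VXLAN meshes it is typically realized as static `[BridgeFDB]` entries (MACAddress=00:00:00:00:00:00, one `Destination=` per peer) in the companion `.network` file, which would match the "Create systemd networkd network files" task that follows.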
2026-04-05 02:20:04.688971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.688993 | orchestrator |
2026-04-05 02:20:04.689004 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-05 02:20:04.689016 | orchestrator | Sunday 05 April 2026 02:19:59 +0000 (0:00:05.679) 0:00:37.660 **********
2026-04-05 02:20:04.689028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689061 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689072 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.689101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689112 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-04-05 02:20:04.689123 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.689134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.689145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.689163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:04.689186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:10.931717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-04-05 02:20:10.931813 | orchestrator |
2026-04-05 02:20:10.931828 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-05 02:20:10.931839 | orchestrator | Sunday 05 April 2026 02:20:04 +0000 (0:00:05.275) 0:00:42.936 **********
2026-04-05 02:20:10.931850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:20:10.931859 | orchestrator |
2026-04-05 02:20:10.931869 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-05 02:20:10.931877 | orchestrator | Sunday 05 April 2026 02:20:06 +0000 (0:00:01.430) 0:00:44.367 **********
2026-04-05 02:20:10.931886 | orchestrator | ok: [testbed-manager]
2026-04-05 02:20:10.931895 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:20:10.931904 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:20:10.931913 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:20:10.931921 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:20:10.931930 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:20:10.931939 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:20:10.931947 | orchestrator |
2026-04-05 02:20:10.931956 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-05 02:20:10.931965 | orchestrator | Sunday 05 April 2026 02:20:07 +0000 (0:00:01.206) 0:00:45.573 **********
2026-04-05 02:20:10.931974 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.931984 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.931992 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932001 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932010 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932019 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932028 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932036 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932045 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:20:10.932054 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932083 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932115 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932136 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932151 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:20:10.932165 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932202 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932218 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932232 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932247 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:20:10.932261 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932277 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932292 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932308 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932343 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:20:10.932353 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932363 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932373 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932383 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932394 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:20:10.932404 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:20:10.932415 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-05 02:20:10.932425 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-05 02:20:10.932434 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-05 02:20:10.932444 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-05 02:20:10.932454 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:20:10.932464 | orchestrator |
2026-04-05 02:20:10.932474 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-05 02:20:10.932513 | orchestrator | Sunday 05 April 2026 02:20:09 +0000 (0:00:02.058) 0:00:47.632 **********
2026-04-05 02:20:10.932524 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:20:10.932534 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:20:10.932545 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:20:10.932556 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:20:10.932566 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:20:10.932576 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:20:10.932586 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:20:10.932596 | orchestrator |
2026-04-05 02:20:10.932606 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-05 02:20:10.932616 | orchestrator | Sunday 05 April 2026 02:20:09 +0000 (0:00:00.638) 0:00:48.270 **********
2026-04-05 02:20:10.932626 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:20:10.932636 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:20:10.932647 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:20:10.932657 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:20:10.932666 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:20:10.932675 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:20:10.932683 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:20:10.932692 | orchestrator | 2026-04-05 02:20:10.932700 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:20:10.932710 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 02:20:10.932719 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932747 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932757 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932765 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932774 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932783 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 02:20:10.932791 | orchestrator | 2026-04-05 02:20:10.932803 | orchestrator | 2026-04-05 02:20:10.932818 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:20:10.932838 | orchestrator | Sunday 05 April 2026 02:20:10 +0000 (0:00:00.634) 0:00:48.904 ********** 2026-04-05 02:20:10.932857 | orchestrator | =============================================================================== 2026-04-05 02:20:10.932879 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.68s 2026-04-05 02:20:10.932895 | orchestrator | osism.commons.network : Create systemd networkd network files 
----------- 5.28s 2026-04-05 02:20:10.932910 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.73s 2026-04-05 02:20:10.932924 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.73s 2026-04-05 02:20:10.932940 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.33s 2026-04-05 02:20:10.932955 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.21s 2026-04-05 02:20:10.932970 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.06s 2026-04-05 02:20:10.932985 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.86s 2026-04-05 02:20:10.932996 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s 2026-04-05 02:20:10.933004 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.74s 2026-04-05 02:20:10.933013 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2026-04-05 02:20:10.933021 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.43s 2026-04-05 02:20:10.933030 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.39s 2026-04-05 02:20:10.933039 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.34s 2026-04-05 02:20:10.933047 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s 2026-04-05 02:20:10.933055 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s 2026-04-05 02:20:10.933064 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s 2026-04-05 02:20:10.933072 | orchestrator | osism.commons.network : List existing configuration files --------------- 
1.21s 2026-04-05 02:20:10.933081 | orchestrator | osism.commons.network : Create required directories --------------------- 1.16s 2026-04-05 02:20:10.933089 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.96s 2026-04-05 02:20:11.171759 | orchestrator | + osism apply wireguard 2026-04-05 02:20:23.210759 | orchestrator | 2026-04-05 02:20:23 | INFO  | Task bc486910-bc93-4510-8dce-abfc7ca369e9 (wireguard) was prepared for execution. 2026-04-05 02:20:23.210841 | orchestrator | 2026-04-05 02:20:23 | INFO  | It takes a moment until task bc486910-bc93-4510-8dce-abfc7ca369e9 (wireguard) has been started and output is visible here. 2026-04-05 02:20:44.959484 | orchestrator | 2026-04-05 02:20:44.959614 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-05 02:20:44.959661 | orchestrator | 2026-04-05 02:20:44.959674 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-05 02:20:44.959686 | orchestrator | Sunday 05 April 2026 02:20:27 +0000 (0:00:00.249) 0:00:00.249 ********** 2026-04-05 02:20:44.959697 | orchestrator | ok: [testbed-manager] 2026-04-05 02:20:44.959709 | orchestrator | 2026-04-05 02:20:44.959736 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-05 02:20:44.959748 | orchestrator | Sunday 05 April 2026 02:20:29 +0000 (0:00:01.581) 0:00:01.831 ********** 2026-04-05 02:20:44.959759 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.959770 | orchestrator | 2026-04-05 02:20:44.959796 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-05 02:20:44.959808 | orchestrator | Sunday 05 April 2026 02:20:36 +0000 (0:00:07.420) 0:00:09.251 ********** 2026-04-05 02:20:44.959819 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.959830 | orchestrator | 2026-04-05 02:20:44.959841 | orchestrator | 
TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-05 02:20:44.959852 | orchestrator | Sunday 05 April 2026 02:20:37 +0000 (0:00:00.562) 0:00:09.813 ********** 2026-04-05 02:20:44.959862 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.959873 | orchestrator | 2026-04-05 02:20:44.959884 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-05 02:20:44.959897 | orchestrator | Sunday 05 April 2026 02:20:38 +0000 (0:00:00.481) 0:00:10.295 ********** 2026-04-05 02:20:44.959909 | orchestrator | ok: [testbed-manager] 2026-04-05 02:20:44.959921 | orchestrator | 2026-04-05 02:20:44.959934 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-05 02:20:44.959947 | orchestrator | Sunday 05 April 2026 02:20:38 +0000 (0:00:00.721) 0:00:11.016 ********** 2026-04-05 02:20:44.959959 | orchestrator | ok: [testbed-manager] 2026-04-05 02:20:44.959971 | orchestrator | 2026-04-05 02:20:44.959985 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-05 02:20:44.959997 | orchestrator | Sunday 05 April 2026 02:20:39 +0000 (0:00:00.427) 0:00:11.444 ********** 2026-04-05 02:20:44.960009 | orchestrator | ok: [testbed-manager] 2026-04-05 02:20:44.960022 | orchestrator | 2026-04-05 02:20:44.960034 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-05 02:20:44.960047 | orchestrator | Sunday 05 April 2026 02:20:39 +0000 (0:00:00.440) 0:00:11.884 ********** 2026-04-05 02:20:44.960060 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.960072 | orchestrator | 2026-04-05 02:20:44.960084 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-05 02:20:44.960097 | orchestrator | Sunday 05 April 2026 02:20:40 +0000 (0:00:01.219) 0:00:13.104 ********** 2026-04-05 02:20:44.960109 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-04-05 02:20:44.960122 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.960134 | orchestrator | 2026-04-05 02:20:44.960147 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-05 02:20:44.960160 | orchestrator | Sunday 05 April 2026 02:20:41 +0000 (0:00:00.970) 0:00:14.075 ********** 2026-04-05 02:20:44.960172 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.960185 | orchestrator | 2026-04-05 02:20:44.960197 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-05 02:20:44.960210 | orchestrator | Sunday 05 April 2026 02:20:43 +0000 (0:00:01.818) 0:00:15.894 ********** 2026-04-05 02:20:44.960223 | orchestrator | changed: [testbed-manager] 2026-04-05 02:20:44.960235 | orchestrator | 2026-04-05 02:20:44.960248 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:20:44.960259 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:20:44.960271 | orchestrator | 2026-04-05 02:20:44.960282 | orchestrator | 2026-04-05 02:20:44.960292 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:20:44.960303 | orchestrator | Sunday 05 April 2026 02:20:44 +0000 (0:00:00.947) 0:00:16.841 ********** 2026-04-05 02:20:44.960322 | orchestrator | =============================================================================== 2026-04-05 02:20:44.960351 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.42s 2026-04-05 02:20:44.960363 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s 2026-04-05 02:20:44.960374 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.58s 2026-04-05 02:20:44.960384 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2026-04-05 02:20:44.960395 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2026-04-05 02:20:44.960406 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-04-05 02:20:44.960416 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2026-04-05 02:20:44.960427 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-04-05 02:20:44.960438 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s 2026-04-05 02:20:44.960449 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s 2026-04-05 02:20:44.960459 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s 2026-04-05 02:20:45.276614 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-05 02:20:45.312411 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-05 02:20:45.312503 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-05 02:20:45.463480 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 99 0 --:--:-- --:--:-- --:--:-- 100 2026-04-05 02:20:45.481606 | orchestrator | + osism apply --environment custom workarounds 2026-04-05 02:20:47.514874 | orchestrator | 2026-04-05 02:20:47 | INFO  | Trying to run play workarounds in environment custom 2026-04-05 02:20:57.632337 | orchestrator | 2026-04-05 02:20:57 | INFO  | Task f53e9e14-4620-42f7-aa10-ba976adb8679 (workarounds) was prepared for execution. 2026-04-05 02:20:57.632496 | orchestrator | 2026-04-05 02:20:57 | INFO  | It takes a moment until task f53e9e14-4620-42f7-aa10-ba976adb8679 (workarounds) has been started and output is visible here. 
2026-04-05 02:21:24.036293 | orchestrator | 2026-04-05 02:21:24.036426 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 02:21:24.036441 | orchestrator | 2026-04-05 02:21:24.036449 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-05 02:21:24.036456 | orchestrator | Sunday 05 April 2026 02:21:01 +0000 (0:00:00.154) 0:00:00.154 ********** 2026-04-05 02:21:24.036463 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036470 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036476 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036482 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036489 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036495 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036501 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-05 02:21:24.036507 | orchestrator | 2026-04-05 02:21:24.036513 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-05 02:21:24.036519 | orchestrator | 2026-04-05 02:21:24.036525 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-05 02:21:24.036532 | orchestrator | Sunday 05 April 2026 02:21:02 +0000 (0:00:00.847) 0:00:01.001 ********** 2026-04-05 02:21:24.036538 | orchestrator | ok: [testbed-manager] 2026-04-05 02:21:24.036545 | orchestrator | 2026-04-05 02:21:24.036571 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-05 02:21:24.036577 | orchestrator | 2026-04-05 02:21:24.036584 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-05 02:21:24.036590 | orchestrator | Sunday 05 April 2026 02:21:05 +0000 (0:00:02.727) 0:00:03.729 ********** 2026-04-05 02:21:24.036597 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:21:24.036603 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:21:24.036609 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:21:24.036615 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:21:24.036621 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:21:24.036627 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:21:24.036633 | orchestrator | 2026-04-05 02:21:24.036640 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-05 02:21:24.036646 | orchestrator | 2026-04-05 02:21:24.036652 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-05 02:21:24.036671 | orchestrator | Sunday 05 April 2026 02:21:07 +0000 (0:00:01.886) 0:00:05.615 ********** 2026-04-05 02:21:24.036678 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036685 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036692 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036698 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036704 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036710 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-05 02:21:24.036716 | orchestrator | 2026-04-05 02:21:24.036722 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-05 02:21:24.036729 | orchestrator | Sunday 05 April 2026 02:21:09 +0000 (0:00:01.605) 0:00:07.220 ********** 2026-04-05 02:21:24.036735 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:21:24.036741 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:21:24.036747 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:21:24.036753 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:21:24.036759 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:21:24.036765 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:21:24.036771 | orchestrator | 2026-04-05 02:21:24.036777 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-05 02:21:24.036783 | orchestrator | Sunday 05 April 2026 02:21:12 +0000 (0:00:03.785) 0:00:11.006 ********** 2026-04-05 02:21:24.036790 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:21:24.036796 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:21:24.036802 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:21:24.036808 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:21:24.036814 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:21:24.036820 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:21:24.036826 | orchestrator | 2026-04-05 02:21:24.036833 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-05 02:21:24.036839 | orchestrator | 2026-04-05 02:21:24.036847 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-05 02:21:24.036855 | orchestrator | Sunday 05 April 2026 02:21:13 +0000 (0:00:00.775) 0:00:11.781 ********** 2026-04-05 02:21:24.036862 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:21:24.036869 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:21:24.036876 | orchestrator | changed: [testbed-node-2] 2026-04-05 
02:21:24.036883 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:21:24.036890 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:21:24.036901 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:21:24.036912 | orchestrator | changed: [testbed-manager] 2026-04-05 02:21:24.036928 | orchestrator | 2026-04-05 02:21:24.036939 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-05 02:21:24.036950 | orchestrator | Sunday 05 April 2026 02:21:15 +0000 (0:00:01.666) 0:00:13.448 ********** 2026-04-05 02:21:24.036959 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:21:24.036969 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:21:24.036979 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:21:24.036989 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:21:24.037000 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:21:24.037009 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:21:24.037034 | orchestrator | changed: [testbed-manager] 2026-04-05 02:21:24.037045 | orchestrator | 2026-04-05 02:21:24.037055 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-05 02:21:24.037065 | orchestrator | Sunday 05 April 2026 02:21:16 +0000 (0:00:01.650) 0:00:15.098 ********** 2026-04-05 02:21:24.037076 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:21:24.037086 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:21:24.037098 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:21:24.037104 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:21:24.037110 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:21:24.037116 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:21:24.037122 | orchestrator | ok: [testbed-manager] 2026-04-05 02:21:24.037129 | orchestrator | 2026-04-05 02:21:24.037135 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-05 02:21:24.037141 | orchestrator 
| Sunday 05 April 2026 02:21:18 +0000 (0:00:01.656) 0:00:16.755 ********** 2026-04-05 02:21:24.037147 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:21:24.037153 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:21:24.037159 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:21:24.037165 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:21:24.037171 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:21:24.037177 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:21:24.037183 | orchestrator | changed: [testbed-manager] 2026-04-05 02:21:24.037189 | orchestrator | 2026-04-05 02:21:24.037195 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-05 02:21:24.037201 | orchestrator | Sunday 05 April 2026 02:21:20 +0000 (0:00:01.886) 0:00:18.641 ********** 2026-04-05 02:21:24.037207 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:21:24.037213 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:21:24.037219 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:21:24.037225 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:21:24.037232 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:21:24.037238 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:21:24.037244 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:21:24.037250 | orchestrator | 2026-04-05 02:21:24.037256 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-05 02:21:24.037262 | orchestrator | 2026-04-05 02:21:24.037268 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-05 02:21:24.037274 | orchestrator | Sunday 05 April 2026 02:21:21 +0000 (0:00:00.616) 0:00:19.258 ********** 2026-04-05 02:21:24.037280 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:21:24.037286 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:21:24.037292 | orchestrator | ok: [testbed-node-1] 
2026-04-05 02:21:24.037298 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:21:24.037305 | orchestrator | ok: [testbed-manager] 2026-04-05 02:21:24.037310 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:21:24.037322 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:21:24.037329 | orchestrator | 2026-04-05 02:21:24.037335 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:21:24.037343 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:21:24.037350 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037380 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037387 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037393 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037400 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037406 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:21:24.037412 | orchestrator | 2026-04-05 02:21:24.037418 | orchestrator | 2026-04-05 02:21:24.037424 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:21:24.037430 | orchestrator | Sunday 05 April 2026 02:21:24 +0000 (0:00:02.962) 0:00:22.221 ********** 2026-04-05 02:21:24.037436 | orchestrator | =============================================================================== 2026-04-05 02:21:24.037443 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s 2026-04-05 02:21:24.037449 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.96s 2026-04-05 02:21:24.037455 | orchestrator | Apply netplan configuration --------------------------------------------- 2.73s 2026-04-05 02:21:24.037461 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s 2026-04-05 02:21:24.037467 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.89s 2026-04-05 02:21:24.037473 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.67s 2026-04-05 02:21:24.037479 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s 2026-04-05 02:21:24.037485 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s 2026-04-05 02:21:24.037492 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.61s 2026-04-05 02:21:24.037498 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.85s 2026-04-05 02:21:24.037504 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s 2026-04-05 02:21:24.037515 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2026-04-05 02:21:24.748856 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-05 02:21:36.849648 | orchestrator | 2026-04-05 02:21:36 | INFO  | Task f40e691c-85ae-4e6c-b54c-2ff8049fcb70 (reboot) was prepared for execution. 2026-04-05 02:21:36.849740 | orchestrator | 2026-04-05 02:21:36 | INFO  | It takes a moment until task f40e691c-85ae-4e6c-b54c-2ff8049fcb70 (reboot) has been started and output is visible here. 
2026-04-05 02:21:47.171947 | orchestrator | 2026-04-05 02:21:47.172058 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 02:21:47.172077 | orchestrator | 2026-04-05 02:21:47.172090 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 02:21:47.172127 | orchestrator | Sunday 05 April 2026 02:21:41 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-04-05 02:21:47.172142 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:21:47.172157 | orchestrator | 2026-04-05 02:21:47.172172 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 02:21:47.172185 | orchestrator | Sunday 05 April 2026 02:21:41 +0000 (0:00:00.106) 0:00:00.324 ********** 2026-04-05 02:21:47.172198 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:21:47.172211 | orchestrator | 2026-04-05 02:21:47.172224 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 02:21:47.172264 | orchestrator | Sunday 05 April 2026 02:21:42 +0000 (0:00:00.970) 0:00:01.295 ********** 2026-04-05 02:21:47.172276 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:21:47.172287 | orchestrator | 2026-04-05 02:21:47.172297 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 02:21:47.172309 | orchestrator | 2026-04-05 02:21:47.172322 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 02:21:47.172334 | orchestrator | Sunday 05 April 2026 02:21:42 +0000 (0:00:00.141) 0:00:01.436 ********** 2026-04-05 02:21:47.172344 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:21:47.172353 | orchestrator | 2026-04-05 02:21:47.172362 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 02:21:47.172423 | orchestrator | Sunday 05 April 2026 
02:21:42 +0000 (0:00:00.102) 0:00:01.539 ********** 2026-04-05 02:21:47.172433 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:21:47.172442 | orchestrator | 2026-04-05 02:21:47.172452 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 02:21:47.172478 | orchestrator | Sunday 05 April 2026 02:21:43 +0000 (0:00:00.689) 0:00:02.228 ********** 2026-04-05 02:21:47.172493 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:21:47.172504 | orchestrator | 2026-04-05 02:21:47.172515 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 02:21:47.172527 | orchestrator | 2026-04-05 02:21:47.172537 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-05 02:21:47.172547 | orchestrator | Sunday 05 April 2026 02:21:43 +0000 (0:00:00.110) 0:00:02.338 ********** 2026-04-05 02:21:47.172556 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:21:47.172565 | orchestrator | 2026-04-05 02:21:47.172577 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-05 02:21:47.172587 | orchestrator | Sunday 05 April 2026 02:21:43 +0000 (0:00:00.232) 0:00:02.571 ********** 2026-04-05 02:21:47.172596 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:21:47.172605 | orchestrator | 2026-04-05 02:21:47.172616 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-05 02:21:47.172627 | orchestrator | Sunday 05 April 2026 02:21:44 +0000 (0:00:00.693) 0:00:03.265 ********** 2026-04-05 02:21:47.172638 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:21:47.172649 | orchestrator | 2026-04-05 02:21:47.172659 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-05 02:21:47.172671 | orchestrator | 2026-04-05 02:21:47.172683 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] *******************
2026-04-05 02:21:47.172693 | orchestrator | Sunday 05 April 2026 02:21:44 +0000 (0:00:00.118) 0:00:03.383 **********
2026-04-05 02:21:47.172702 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:21:47.172711 | orchestrator |
2026-04-05 02:21:47.172720 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 02:21:47.172730 | orchestrator | Sunday 05 April 2026 02:21:44 +0000 (0:00:00.125) 0:00:03.509 **********
2026-04-05 02:21:47.172742 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:21:47.172752 | orchestrator |
2026-04-05 02:21:47.172761 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 02:21:47.172770 | orchestrator | Sunday 05 April 2026 02:21:45 +0000 (0:00:00.667) 0:00:04.176 **********
2026-04-05 02:21:47.172779 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:21:47.172789 | orchestrator |
2026-04-05 02:21:47.172798 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 02:21:47.172807 | orchestrator |
2026-04-05 02:21:47.172817 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 02:21:47.172827 | orchestrator | Sunday 05 April 2026 02:21:45 +0000 (0:00:00.115) 0:00:04.292 **********
2026-04-05 02:21:47.172836 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:21:47.172845 | orchestrator |
2026-04-05 02:21:47.172854 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 02:21:47.172863 | orchestrator | Sunday 05 April 2026 02:21:45 +0000 (0:00:00.116) 0:00:04.409 **********
2026-04-05 02:21:47.172882 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:21:47.172891 | orchestrator |
2026-04-05 02:21:47.172900 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 02:21:47.172909 | orchestrator | Sunday 05 April 2026 02:21:45 +0000 (0:00:00.631) 0:00:05.040 **********
2026-04-05 02:21:47.172918 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:21:47.172927 | orchestrator |
2026-04-05 02:21:47.172937 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-05 02:21:47.172946 | orchestrator |
2026-04-05 02:21:47.172955 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-05 02:21:47.172966 | orchestrator | Sunday 05 April 2026 02:21:46 +0000 (0:00:00.144) 0:00:05.184 **********
2026-04-05 02:21:47.172977 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:21:47.172986 | orchestrator |
2026-04-05 02:21:47.172995 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-05 02:21:47.173005 | orchestrator | Sunday 05 April 2026 02:21:46 +0000 (0:00:00.118) 0:00:05.302 **********
2026-04-05 02:21:47.173014 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:21:47.173023 | orchestrator |
2026-04-05 02:21:47.173032 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-05 02:21:47.173042 | orchestrator | Sunday 05 April 2026 02:21:46 +0000 (0:00:00.644) 0:00:05.947 **********
2026-04-05 02:21:47.173068 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:21:47.173078 | orchestrator |
2026-04-05 02:21:47.173088 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:21:47.173099 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173109 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173118 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173127 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173137 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173147 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:21:47.173157 | orchestrator |
2026-04-05 02:21:47.173166 | orchestrator |
2026-04-05 02:21:47.173175 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:21:47.173184 | orchestrator | Sunday 05 April 2026 02:21:46 +0000 (0:00:00.043) 0:00:05.990 **********
2026-04-05 02:21:47.173209 | orchestrator | ===============================================================================
2026-04-05 02:21:47.173222 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s
2026-04-05 02:21:47.173231 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.80s
2026-04-05 02:21:47.173240 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2026-04-05 02:21:47.495263 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-05 02:21:59.570647 | orchestrator | 2026-04-05 02:21:59 | INFO  | Task b575a8e5-c93e-46bd-9677-6d66a38b5421 (wait-for-connection) was prepared for execution.
2026-04-05 02:21:59.570750 | orchestrator | 2026-04-05 02:21:59 | INFO  | It takes a moment until task b575a8e5-c93e-46bd-9677-6d66a38b5421 (wait-for-connection) has been started and output is visible here.
2026-04-05 02:22:15.952645 | orchestrator |
2026-04-05 02:22:15.952781 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-05 02:22:15.952829 | orchestrator |
2026-04-05 02:22:15.952844 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-05 02:22:15.952855 | orchestrator | Sunday 05 April 2026 02:22:03 +0000 (0:00:00.243) 0:00:00.243 **********
2026-04-05 02:22:15.952867 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:22:15.952879 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:22:15.952895 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:22:15.952912 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:22:15.952928 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:22:15.952946 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:22:15.952963 | orchestrator |
2026-04-05 02:22:15.952981 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:22:15.953000 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953020 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953040 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953058 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953076 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953092 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:22:15.953103 | orchestrator |
2026-04-05 02:22:15.953115 | orchestrator |
2026-04-05 02:22:15.953126 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:22:15.953137 | orchestrator | Sunday 05 April 2026 02:22:15 +0000 (0:00:11.672) 0:00:11.916 **********
2026-04-05 02:22:15.953148 | orchestrator | ===============================================================================
2026-04-05 02:22:15.953160 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.67s
2026-04-05 02:22:16.296056 | orchestrator | + osism apply hddtemp
2026-04-05 02:22:28.471144 | orchestrator | 2026-04-05 02:22:28 | INFO  | Task 9ffdb73a-d484-4027-b49e-5471ec2f07a0 (hddtemp) was prepared for execution.
2026-04-05 02:22:28.471235 | orchestrator | 2026-04-05 02:22:28 | INFO  | It takes a moment until task 9ffdb73a-d484-4027-b49e-5471ec2f07a0 (hddtemp) has been started and output is visible here.
2026-04-05 02:23:10.683126 | orchestrator |
2026-04-05 02:23:10.683462 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-04-05 02:23:10.683509 | orchestrator |
2026-04-05 02:23:10.683530 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-04-05 02:23:10.683551 | orchestrator | Sunday 05 April 2026 02:22:32 +0000 (0:00:00.254) 0:00:00.254 **********
2026-04-05 02:23:10.683570 | orchestrator | ok: [testbed-manager]
2026-04-05 02:23:10.683584 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:23:10.683595 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:23:10.683606 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:23:10.683617 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:23:10.683628 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:23:10.683639 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:23:10.683650 | orchestrator |
2026-04-05 02:23:10.683661 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-04-05 02:23:10.683674 | orchestrator | Sunday 05 April 2026 02:22:33 +0000 (0:00:00.730) 0:00:00.985 **********
2026-04-05 02:23:10.683690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:23:10.683731 | orchestrator |
2026-04-05 02:23:10.683746 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-04-05 02:23:10.683758 | orchestrator | Sunday 05 April 2026 02:22:34 +0000 (0:00:01.220) 0:00:02.206 **********
2026-04-05 02:23:10.683772 | orchestrator | ok: [testbed-manager]
2026-04-05 02:23:10.683784 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:23:10.683797 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:23:10.683810 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:23:10.683823 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:23:10.683836 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:23:10.683848 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:23:10.683860 | orchestrator |
2026-04-05 02:23:10.683873 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-04-05 02:23:10.683898 | orchestrator | Sunday 05 April 2026 02:22:37 +0000 (0:00:02.987) 0:00:05.194 **********
2026-04-05 02:23:10.683911 | orchestrator | changed: [testbed-manager]
2026-04-05 02:23:10.683925 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:23:10.683937 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:23:10.683950 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:23:10.683962 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:23:10.683975 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:23:10.683987 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:23:10.684000 | orchestrator |
2026-04-05 02:23:10.684017 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-04-05 02:23:10.684036 | orchestrator | Sunday 05 April 2026 02:22:38 +0000 (0:00:01.186) 0:00:06.381 **********
2026-04-05 02:23:10.684055 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:23:10.684073 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:23:10.684089 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:23:10.684106 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:23:10.684125 | orchestrator | ok: [testbed-manager]
2026-04-05 02:23:10.684143 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:23:10.684162 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:23:10.684181 | orchestrator |
2026-04-05 02:23:10.684199 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-04-05 02:23:10.684217 | orchestrator | Sunday 05 April 2026 02:22:39 +0000 (0:00:01.495) 0:00:07.877 **********
2026-04-05 02:23:10.684237 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:23:10.684254 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:23:10.684274 | orchestrator | changed: [testbed-manager]
2026-04-05 02:23:10.684285 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:23:10.684296 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:23:10.684311 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:23:10.684329 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:23:10.684347 | orchestrator |
2026-04-05 02:23:10.684364 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-04-05 02:23:10.684381 | orchestrator | Sunday 05 April 2026 02:22:40 +0000 (0:00:00.851) 0:00:08.728 **********
2026-04-05 02:23:10.684398 | orchestrator | changed: [testbed-manager]
2026-04-05 02:23:10.684488 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:23:10.684509 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:23:10.684521 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:23:10.684532 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:23:10.684543 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:23:10.684554 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:23:10.684565 | orchestrator |
2026-04-05 02:23:10.684576 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-04-05 02:23:10.684587 | orchestrator | Sunday 05 April 2026 02:23:05 +0000 (0:00:24.799) 0:00:33.528 **********
2026-04-05 02:23:10.684599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:23:10.684610 | orchestrator |
2026-04-05 02:23:10.684634 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-04-05 02:23:10.684645 | orchestrator | Sunday 05 April 2026 02:23:06 +0000 (0:00:01.285) 0:00:34.814 **********
2026-04-05 02:23:10.684656 | orchestrator | changed: [testbed-manager]
2026-04-05 02:23:10.684667 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:23:10.684677 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:23:10.684688 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:23:10.684699 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:23:10.684710 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:23:10.684721 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:23:10.684732 | orchestrator |
2026-04-05 02:23:10.684743 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:23:10.684754 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:23:10.684788 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684800 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684812 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684822 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684833 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684844 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 02:23:10.684855 | orchestrator |
2026-04-05 02:23:10.684865 | orchestrator |
2026-04-05 02:23:10.684876 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:23:10.684887 | orchestrator | Sunday 05 April 2026 02:23:10 +0000 (0:00:03.400) 0:00:38.215 **********
2026-04-05 02:23:10.684898 | orchestrator | ===============================================================================
2026-04-05 02:23:10.684909 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 24.80s
2026-04-05 02:23:10.684919 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 3.40s
2026-04-05 02:23:10.684930 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.99s
2026-04-05 02:23:10.684949 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.50s
2026-04-05 02:23:10.684960 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s
2026-04-05 02:23:10.684971 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s
2026-04-05 02:23:10.684982 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s
2026-04-05 02:23:10.684993 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.85s
2026-04-05 02:23:10.685004 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s
2026-04-05 02:23:10.992872 | orchestrator | ++ semver 9.5.0 7.1.1
2026-04-05 02:23:11.042196 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 02:23:11.042293 | orchestrator | + sudo systemctl restart manager.service
2026-04-05 02:23:24.865698 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-05 02:23:24.865803 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-05 02:23:24.865820 | orchestrator | + local max_attempts=60
2026-04-05 02:23:24.865834 | orchestrator | + local name=ceph-ansible
2026-04-05 02:23:24.865845 | orchestrator | + local attempt_num=1
2026-04-05 02:23:24.865856 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:24.903857 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:24.903952 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:24.903968 | orchestrator | + sleep 5
2026-04-05 02:23:29.908988 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:29.948686 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:29.948800 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:29.948824 | orchestrator | + sleep 5
2026-04-05 02:23:34.951041 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:34.986627 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:34.986712 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:34.986728 | orchestrator | + sleep 5
2026-04-05 02:23:39.990856 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:40.033805 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:40.033891 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:40.033899 | orchestrator | + sleep 5
2026-04-05 02:23:45.039120 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:45.086466 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:45.086568 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:45.086584 | orchestrator | + sleep 5
2026-04-05 02:23:50.090654 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:50.128515 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:50.128622 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:50.128639 | orchestrator | + sleep 5
2026-04-05 02:23:55.132966 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:23:55.171760 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:23:55.171857 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:23:55.171871 | orchestrator | + sleep 5
2026-04-05 02:24:00.175118 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:00.217588 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:00.217687 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:00.217701 | orchestrator | + sleep 5
2026-04-05 02:24:05.221679 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:05.273888 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:05.273992 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:05.274008 | orchestrator | + sleep 5
2026-04-05 02:24:10.279289 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:10.322828 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:10.322914 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:10.322925 | orchestrator | + sleep 5
2026-04-05 02:24:15.327758 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:15.370217 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:15.370417 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:15.370498 | orchestrator | + sleep 5
2026-04-05 02:24:20.375098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:20.416177 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:20.416281 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:20.416305 | orchestrator | + sleep 5
2026-04-05 02:24:25.423129 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:25.461855 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:25.461951 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-04-05 02:24:25.461966 | orchestrator | + sleep 5
2026-04-05 02:24:30.465102 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-05 02:24:30.491668 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:30.491745 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-05 02:24:30.491754 | orchestrator | + local max_attempts=60
2026-04-05 02:24:30.491760 | orchestrator | + local name=kolla-ansible
2026-04-05 02:24:30.491766 | orchestrator | + local attempt_num=1
2026-04-05 02:24:30.492044 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-05 02:24:30.526217 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:30.526301 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-05 02:24:30.526313 | orchestrator | + local max_attempts=60
2026-04-05 02:24:30.526347 | orchestrator | + local name=osism-ansible
2026-04-05 02:24:30.526355 | orchestrator | + local attempt_num=1
2026-04-05 02:24:30.526580 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-05 02:24:30.561057 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-05 02:24:30.561147 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-05 02:24:30.561162 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-05 02:24:30.741673 | orchestrator | ARA in ceph-ansible already disabled.
2026-04-05 02:24:30.898273 | orchestrator | ARA in kolla-ansible already disabled.
2026-04-05 02:24:31.060650 | orchestrator | ARA in osism-ansible already disabled.
2026-04-05 02:24:31.219732 | orchestrator | ARA in osism-kubernetes already disabled.
2026-04-05 02:24:31.220461 | orchestrator | + osism apply gather-facts
2026-04-05 02:24:43.517063 | orchestrator | 2026-04-05 02:24:43 | INFO  | Task 7151d652-e1be-488f-8485-4587c41560c6 (gather-facts) was prepared for execution.
2026-04-05 02:24:43.517179 | orchestrator | 2026-04-05 02:24:43 | INFO  | It takes a moment until task 7151d652-e1be-488f-8485-4587c41560c6 (gather-facts) has been started and output is visible here.
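The xtrace above shows the body of the `wait_for_container_healthy` helper as it runs. The following is a reconstruction inferred from that trace, not the original script: it polls the Docker health status until the container reports `healthy` or the attempt budget is exhausted. For portability this sketch calls `docker` from `PATH` rather than the `/usr/bin/docker` path seen in the trace.

```shell
# Sketch of wait_for_container_healthy, reconstructed from the xtrace above.
# Polls the container's health status every 5 seconds, up to max_attempts times.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        # Give up once the attempt budget is spent.
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace it is invoked as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 polls five seconds apart, so a container gets roughly five minutes to pass its health check before the deploy aborts.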
2026-04-05 02:24:56.839730 | orchestrator | 2026-04-05 02:24:56.839838 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 02:24:56.839857 | orchestrator | 2026-04-05 02:24:56.839868 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 02:24:56.839879 | orchestrator | Sunday 05 April 2026 02:24:47 +0000 (0:00:00.215) 0:00:00.215 ********** 2026-04-05 02:24:56.839889 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:24:56.839901 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:24:56.839911 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:24:56.839921 | orchestrator | ok: [testbed-manager] 2026-04-05 02:24:56.839931 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:24:56.839941 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:24:56.839950 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:24:56.839960 | orchestrator | 2026-04-05 02:24:56.839970 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 02:24:56.839979 | orchestrator | 2026-04-05 02:24:56.839989 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 02:24:56.839999 | orchestrator | Sunday 05 April 2026 02:24:55 +0000 (0:00:08.003) 0:00:08.219 ********** 2026-04-05 02:24:56.840009 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:24:56.840019 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:24:56.840029 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:24:56.840039 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:24:56.840049 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:24:56.840058 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:24:56.840068 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:24:56.840078 | orchestrator | 2026-04-05 02:24:56.840088 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 02:24:56.840148 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840161 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840171 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840181 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840191 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840201 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840210 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 02:24:56.840245 | orchestrator | 2026-04-05 02:24:56.840255 | orchestrator | 2026-04-05 02:24:56.840265 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:24:56.840275 | orchestrator | Sunday 05 April 2026 02:24:56 +0000 (0:00:00.617) 0:00:08.836 ********** 2026-04-05 02:24:56.840285 | orchestrator | =============================================================================== 2026-04-05 02:24:56.840295 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.00s 2026-04-05 02:24:56.840305 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-04-05 02:24:57.219940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-05 02:24:57.234635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-05 
02:24:57.249986 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-05 02:24:57.263967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-05 02:24:57.282226 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-05 02:24:57.298891 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-05 02:24:57.313942 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-05 02:24:57.328734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-05 02:24:57.347464 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-05 02:24:57.367080 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-05 02:24:57.384259 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-05 02:24:57.404538 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-05 02:24:57.423682 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-05 02:24:57.433457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-05 02:24:57.445795 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-05 02:24:57.458974 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-05 02:24:57.471947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-05 02:24:57.484179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-05 02:24:57.497545 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-05 02:24:57.509076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-05 02:24:57.528239 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-05 02:24:57.545172 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-05 02:24:57.557924 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-05 02:24:57.571825 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-05 02:24:57.666962 | orchestrator | ok: Runtime: 0:25:35.470620 2026-04-05 02:24:57.761397 | 2026-04-05 02:24:57.761535 | TASK [Deploy services] 2026-04-05 02:24:58.491398 | orchestrator | 2026-04-05 02:24:58.491531 | orchestrator | # DEPLOY SERVICES 2026-04-05 02:24:58.491543 | orchestrator | 2026-04-05 02:24:58.491549 | orchestrator | + set -e 2026-04-05 02:24:58.491554 | orchestrator | + echo 2026-04-05 02:24:58.491559 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-05 02:24:58.491565 | orchestrator | + echo 2026-04-05 02:24:58.491586 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 02:24:58.491595 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 02:24:58.491601 | orchestrator | ++ INTERACTIVE=false 2026-04-05 
02:24:58.491606 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 02:24:58.491615 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 02:24:58.491619 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 02:24:58.491625 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 02:24:58.491629 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 02:24:58.491635 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 02:24:58.491639 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 02:24:58.491645 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 02:24:58.491649 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 02:24:58.491655 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-05 02:24:58.491659 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-05 02:24:58.491680 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-05 02:24:58.491688 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-05 02:24:58.491695 | orchestrator | ++ export ARA=false
2026-04-05 02:24:58.491701 | orchestrator | ++ ARA=false
2026-04-05 02:24:58.491708 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 02:24:58.491715 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 02:24:58.491731 | orchestrator | ++ export TEMPEST=false
2026-04-05 02:24:58.491738 | orchestrator | ++ TEMPEST=false
2026-04-05 02:24:58.491744 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 02:24:58.491750 | orchestrator | ++ IS_ZUUL=true
2026-04-05 02:24:58.491754 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:24:58.491758 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:24:58.491762 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 02:24:58.491765 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 02:24:58.491769 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 02:24:58.491773 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 02:24:58.491777 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 02:24:58.491780 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 02:24:58.491784 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 02:24:58.491792 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 02:24:58.491796 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-05 02:24:58.503921 | orchestrator | + set -e
2026-04-05 02:24:58.503990 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 02:24:58.504002 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 02:24:58.504009 | orchestrator | ++ INTERACTIVE=false
2026-04-05 02:24:58.504015 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 02:24:58.504023 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 02:24:58.504030 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 02:24:58.504036 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 02:24:58.504043 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 02:24:58.504058 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 02:24:58.504064 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 02:24:58.504077 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 02:24:58.504082 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 02:24:58.504085 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-05 02:24:58.504090 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-05 02:24:58.504093 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-05 02:24:58.504097 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-05 02:24:58.504102 | orchestrator | ++ export ARA=false
2026-04-05 02:24:58.504106 | orchestrator | ++ ARA=false
2026-04-05 02:24:58.504109 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 02:24:58.504113 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 02:24:58.504117 | orchestrator | ++ export TEMPEST=false
2026-04-05 02:24:58.504137 | orchestrator | ++ TEMPEST=false
2026-04-05 02:24:58.504144 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 02:24:58.504149 | orchestrator | ++ IS_ZUUL=true
2026-04-05 02:24:58.504163 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:24:58.504170 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:24:58.504192 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 02:24:58.504249 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 02:24:58.504352 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 02:24:58.504417 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 02:24:58.504550 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 02:24:58.504586 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 02:24:58.505664 | orchestrator |
2026-04-05 02:24:58.505688 | orchestrator | # PULL IMAGES
2026-04-05 02:24:58.505693 | orchestrator |
2026-04-05 02:24:58.505698 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 02:24:58.505703 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 02:24:58.505707 | orchestrator | + echo
2026-04-05 02:24:58.505711 | orchestrator | + echo '# PULL IMAGES'
2026-04-05 02:24:58.505715 | orchestrator | + echo
2026-04-05 02:24:58.506425 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-05 02:24:58.562252 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 02:24:58.562338 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-05 02:25:00.587324 | orchestrator | 2026-04-05 02:25:00 | INFO  | Trying to run play pull-images in environment custom
2026-04-05 02:25:10.737248 | orchestrator | 2026-04-05 02:25:10 | INFO  | Task c2c123b9-7515-4c16-a432-3c9a07cf3632 (pull-images) was prepared for execution.
2026-04-05 02:25:10.737365 | orchestrator | 2026-04-05 02:25:10 | INFO  | Task c2c123b9-7515-4c16-a432-3c9a07cf3632 is running in background. No more output. Check ARA for logs.
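The trace above shows the version-gate pattern the testbed scripts use: a `semver` helper compares `MANAGER_VERSION` against a minimum version, and the play only runs when the comparison result is non-negative (`[[ 1 -ge 0 ]]`). A minimal sketch of such a gate follows; the `version_ge` helper is a hypothetical stand-in built on GNU `sort -V`, not the testbed's actual `semver` binary:

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the testbed's `semver` helper: succeeds when
# the first version is greater than or equal to the second.
version_ge() {
    # sort -V orders version strings; if $2 sorts first, then $1 >= $2.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

MANAGER_VERSION=9.5.0

if version_ge "$MANAGER_VERSION" 7.0.0; then
    echo "gate passed: would run 'osism apply --no-wait -r 2 -e custom pull-images'"
else
    echo "gate failed: skipping pull-images"
fi
```

With `MANAGER_VERSION=9.5.0` the gate passes, matching the `[[ 1 -ge 0 ]]` branch taken in the log.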
2026-04-05 02:25:11.076826 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-05 02:25:23.332296 | orchestrator | 2026-04-05 02:25:23 | INFO  | Task 7b22a314-655b-411a-a9d8-ba1b1075b3c9 (cgit) was prepared for execution.
2026-04-05 02:25:23.332431 | orchestrator | 2026-04-05 02:25:23 | INFO  | Task 7b22a314-655b-411a-a9d8-ba1b1075b3c9 is running in background. No more output. Check ARA for logs.
2026-04-05 02:25:36.003863 | orchestrator | 2026-04-05 02:25:35 | INFO  | Task 669ca995-4b5c-48d6-970c-9af8e366466f (dotfiles) was prepared for execution.
2026-04-05 02:25:36.003964 | orchestrator | 2026-04-05 02:25:35 | INFO  | Task 669ca995-4b5c-48d6-970c-9af8e366466f is running in background. No more output. Check ARA for logs.
2026-04-05 02:25:48.717131 | orchestrator | 2026-04-05 02:25:48 | INFO  | Task af97049c-700f-43dc-b797-32df14f12d54 (homer) was prepared for execution.
2026-04-05 02:25:48.717231 | orchestrator | 2026-04-05 02:25:48 | INFO  | Task af97049c-700f-43dc-b797-32df14f12d54 is running in background. No more output. Check ARA for logs.
2026-04-05 02:26:01.228877 | orchestrator | 2026-04-05 02:26:01 | INFO  | Task c29ac8b0-57bc-46e7-b014-044f02c846d2 (phpmyadmin) was prepared for execution.
2026-04-05 02:26:01.228998 | orchestrator | 2026-04-05 02:26:01 | INFO  | Task c29ac8b0-57bc-46e7-b014-044f02c846d2 is running in background. No more output. Check ARA for logs.
2026-04-05 02:26:13.949248 | orchestrator | 2026-04-05 02:26:13 | INFO  | Task d253204a-b02f-45d5-9fbd-4a75d5b49f1a (sosreport) was prepared for execution.
2026-04-05 02:26:13.949705 | orchestrator | 2026-04-05 02:26:13 | INFO  | Task d253204a-b02f-45d5-9fbd-4a75d5b49f1a is running in background. No more output. Check ARA for logs.
2026-04-05 02:26:14.270621 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-05 02:26:14.280942 | orchestrator | + set -e
2026-04-05 02:26:14.281025 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 02:26:14.281041 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 02:26:14.281053 | orchestrator | ++ INTERACTIVE=false
2026-04-05 02:26:14.281067 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 02:26:14.281079 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 02:26:14.281090 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 02:26:14.281101 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 02:26:14.281111 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 02:26:14.281122 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 02:26:14.281133 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 02:26:14.281145 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 02:26:14.281156 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 02:26:14.281166 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-05 02:26:14.281177 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-05 02:26:14.281188 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-05 02:26:14.281199 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-05 02:26:14.281210 | orchestrator | ++ export ARA=false
2026-04-05 02:26:14.281221 | orchestrator | ++ ARA=false
2026-04-05 02:26:14.281232 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 02:26:14.281270 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 02:26:14.281281 | orchestrator | ++ export TEMPEST=false
2026-04-05 02:26:14.281292 | orchestrator | ++ TEMPEST=false
2026-04-05 02:26:14.281303 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 02:26:14.281313 | orchestrator | ++ IS_ZUUL=true
2026-04-05 02:26:14.281340 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:26:14.281357 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 02:26:14.281369 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 02:26:14.281380 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 02:26:14.281391 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 02:26:14.281402 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 02:26:14.281413 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 02:26:14.281424 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 02:26:14.281435 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 02:26:14.281447 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 02:26:14.281731 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-05 02:26:14.344970 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 02:26:14.345065 | orchestrator | + osism apply frr
2026-04-05 02:26:26.817014 | orchestrator | 2026-04-05 02:26:26 | INFO  | Task 582fc1d0-d7f1-4fc1-81f7-42d6658785a7 (frr) was prepared for execution.
2026-04-05 02:26:26.817484 | orchestrator | 2026-04-05 02:26:26 | INFO  | It takes a moment until task 582fc1d0-d7f1-4fc1-81f7-42d6658785a7 (frr) has been started and output is visible here.
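`include.sh` exports `OSISM_APPLY_RETRY=1`, and the earlier `osism apply --no-wait -r 2 ...` call passes a retry count explicitly; the actual retry handling lives inside the `osism` CLI. Purely as an illustration of the retry idea (this wrapper is hypothetical and not part of the testbed scripts), a generic retry loop in shell looks like:

```shell
#!/usr/bin/env bash
set -e

# Hypothetical retry wrapper in the spirit of OSISM_APPLY_RETRY / `osism apply -r N`.
# Usage: retry <attempts> <command> [args...]
retry() {
    local attempts=$1; shift
    local n=1
    until "$@"; do
        if [ "$n" -ge "$attempts" ]; then
            echo "command failed after $n attempt(s): $*" >&2
            return 1
        fi
        n=$((n + 1))
        echo "retrying ($n/$attempts): $*" >&2
    done
}

retry 3 true                              # succeeds on the first attempt
retry 2 false || echo "gave up after 2 attempts"
```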
2026-04-05 02:27:04.923098 | orchestrator |
2026-04-05 02:27:04.923214 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-05 02:27:04.923232 | orchestrator |
2026-04-05 02:27:04.923246 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-05 02:27:04.923264 | orchestrator | Sunday 05 April 2026 02:26:34 +0000 (0:00:00.310) 0:00:00.310 **********
2026-04-05 02:27:04.923277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 02:27:04.923305 | orchestrator |
2026-04-05 02:27:04.923327 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-05 02:27:04.923338 | orchestrator | Sunday 05 April 2026 02:26:35 +0000 (0:00:00.458) 0:00:00.768 **********
2026-04-05 02:27:04.923349 | orchestrator | changed: [testbed-manager]
2026-04-05 02:27:04.923361 | orchestrator |
2026-04-05 02:27:04.923372 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-05 02:27:04.923386 | orchestrator | Sunday 05 April 2026 02:26:37 +0000 (0:00:02.351) 0:00:03.119 **********
2026-04-05 02:27:04.923397 | orchestrator | changed: [testbed-manager]
2026-04-05 02:27:04.923408 | orchestrator |
2026-04-05 02:27:04.923419 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-05 02:27:04.923430 | orchestrator | Sunday 05 April 2026 02:26:52 +0000 (0:00:14.318) 0:00:17.438 **********
2026-04-05 02:27:04.923441 | orchestrator | ok: [testbed-manager]
2026-04-05 02:27:04.923453 | orchestrator |
2026-04-05 02:27:04.923463 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-05 02:27:04.923474 | orchestrator | Sunday 05 April 2026 02:26:53 +0000 (0:00:01.140) 0:00:18.579 **********
2026-04-05 02:27:04.923485 | orchestrator | changed: [testbed-manager]
2026-04-05 02:27:04.923495 | orchestrator |
2026-04-05 02:27:04.923506 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-05 02:27:04.923517 | orchestrator | Sunday 05 April 2026 02:26:55 +0000 (0:00:01.849) 0:00:20.428 **********
2026-04-05 02:27:04.923528 | orchestrator | ok: [testbed-manager]
2026-04-05 02:27:04.923568 | orchestrator |
2026-04-05 02:27:04.923580 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-05 02:27:04.923593 | orchestrator | Sunday 05 April 2026 02:26:56 +0000 (0:00:01.361) 0:00:21.790 **********
2026-04-05 02:27:04.923604 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:27:04.923615 | orchestrator |
2026-04-05 02:27:04.923628 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-05 02:27:04.923641 | orchestrator | Sunday 05 April 2026 02:26:56 +0000 (0:00:00.153) 0:00:21.944 **********
2026-04-05 02:27:04.923677 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:27:04.923692 | orchestrator |
2026-04-05 02:27:04.923705 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-05 02:27:04.923719 | orchestrator | Sunday 05 April 2026 02:26:56 +0000 (0:00:00.203) 0:00:22.147 **********
2026-04-05 02:27:04.923732 | orchestrator | changed: [testbed-manager]
2026-04-05 02:27:04.923746 | orchestrator |
2026-04-05 02:27:04.923759 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-05 02:27:04.923771 | orchestrator | Sunday 05 April 2026 02:26:58 +0000 (0:00:01.382) 0:00:23.530 **********
2026-04-05 02:27:04.923781 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-05 02:27:04.923792 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-05 02:27:04.923805 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-05 02:27:04.923815 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-05 02:27:04.923826 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-05 02:27:04.923837 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-05 02:27:04.923848 | orchestrator |
2026-04-05 02:27:04.923859 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-05 02:27:04.923869 | orchestrator | Sunday 05 April 2026 02:27:00 +0000 (0:00:02.767) 0:00:26.298 **********
2026-04-05 02:27:04.923880 | orchestrator | ok: [testbed-manager]
2026-04-05 02:27:04.923891 | orchestrator |
2026-04-05 02:27:04.923901 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-05 02:27:04.923912 | orchestrator | Sunday 05 April 2026 02:27:02 +0000 (0:00:01.846) 0:00:28.144 **********
2026-04-05 02:27:04.923923 | orchestrator | changed: [testbed-manager]
2026-04-05 02:27:04.923933 | orchestrator |
2026-04-05 02:27:04.923944 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:27:04.923955 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:27:04.923966 | orchestrator |
2026-04-05 02:27:04.923977 | orchestrator |
2026-04-05 02:27:04.923994 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:27:04.924005 | orchestrator | Sunday 05 April 2026 02:27:04 +0000 (0:00:01.610) 0:00:29.755 **********
2026-04-05 02:27:04.924016 | orchestrator | ===============================================================================
2026-04-05 02:27:04.924027 | orchestrator | osism.services.frr : Install frr package ------------------------------- 14.32s
2026-04-05 02:27:04.924037 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.77s
2026-04-05 02:27:04.924048 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.35s
2026-04-05 02:27:04.924058 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.85s
2026-04-05 02:27:04.924069 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.85s
2026-04-05 02:27:04.924098 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.61s
2026-04-05 02:27:04.924110 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.38s
2026-04-05 02:27:04.924120 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.36s
2026-04-05 02:27:04.924131 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.14s
2026-04-05 02:27:04.924146 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.46s
2026-04-05 02:27:04.924164 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.20s
2026-04-05 02:27:04.924183 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-04-05 02:27:05.515022 | orchestrator | + osism apply kubernetes
2026-04-05 02:27:07.827630 | orchestrator | 2026-04-05 02:27:07 | INFO  | Task ad7ef460-7c74-4354-b56b-271d1a32cd02 (kubernetes) was prepared for execution.
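The "Set sysctl parameters" task above applied six `net.ipv4` kernel parameters on testbed-manager via Ansible's sysctl module. The same settings, rendered as a sysctl.d-style drop-in, look like the sketch below; it writes to a temporary file so it runs unprivileged, whereas on a real host the file would go to `/etc/sysctl.d/` (the `90-frr.conf` name is an assumption) followed by `sysctl --system`:

```shell
#!/usr/bin/env bash
set -e

# Render the kernel parameters the osism.services.frr role set in the play
# above into a sysctl.d-style drop-in. Sketch only: the role itself uses
# Ansible's sysctl module, and a real host would use /etc/sysctl.d/90-frr.conf
# (hypothetical name) plus `sysctl --system` to load it.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
echo "wrote $(wc -l < "$conf") sysctl entries to $conf"
```

These values match the `(item={'name': ..., 'value': ...})` pairs reported by the task; forwarding and multipath hashing are what the k3s_cilium-flavoured frr.conf deployed above relies on.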
2026-04-05 02:27:07.827703 | orchestrator | 2026-04-05 02:27:07 | INFO  | It takes a moment until task ad7ef460-7c74-4354-b56b-271d1a32cd02 (kubernetes) has been started and output is visible here.
2026-04-05 02:27:34.064172 | orchestrator |
2026-04-05 02:27:34.064310 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-05 02:27:34.064332 | orchestrator |
2026-04-05 02:27:34.065228 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-05 02:27:34.065306 | orchestrator | Sunday 05 April 2026 02:27:12 +0000 (0:00:00.206) 0:00:00.206 **********
2026-04-05 02:27:34.065322 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:27:34.065335 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:27:34.065346 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:27:34.065357 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:27:34.065368 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:27:34.065398 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:27:34.065410 | orchestrator |
2026-04-05 02:27:34.065433 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-05 02:27:34.065444 | orchestrator | Sunday 05 April 2026 02:27:13 +0000 (0:00:00.877) 0:00:01.084 **********
2026-04-05 02:27:34.065456 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.065468 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.065479 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.065490 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.065500 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.065511 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.065522 | orchestrator |
2026-04-05 02:27:34.065533 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-05 02:27:34.065612 | orchestrator | Sunday 05 April 2026 02:27:14 +0000 (0:00:00.691) 0:00:01.776 **********
2026-04-05 02:27:34.065627 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.065638 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.065649 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.065660 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.065671 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.065682 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.065693 | orchestrator |
2026-04-05 02:27:34.065704 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-05 02:27:34.065716 | orchestrator | Sunday 05 April 2026 02:27:15 +0000 (0:00:00.931) 0:00:02.707 **********
2026-04-05 02:27:34.065727 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:27:34.065738 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:27:34.065748 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:27:34.065763 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:27:34.065774 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:27:34.065785 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:27:34.065796 | orchestrator |
2026-04-05 02:27:34.065807 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-05 02:27:34.065818 | orchestrator | Sunday 05 April 2026 02:27:17 +0000 (0:00:02.089) 0:00:04.797 **********
2026-04-05 02:27:34.065830 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:27:34.065841 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:27:34.065851 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:27:34.065862 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:27:34.065873 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:27:34.065884 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:27:34.065895 | orchestrator |
2026-04-05 02:27:34.065906 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-05 02:27:34.065917 | orchestrator | Sunday 05 April 2026 02:27:19 +0000 (0:00:01.878) 0:00:06.675 **********
2026-04-05 02:27:34.065928 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:27:34.065969 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:27:34.065981 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:27:34.065992 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:27:34.066002 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:27:34.066076 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:27:34.066090 | orchestrator |
2026-04-05 02:27:34.066114 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-05 02:27:34.066129 | orchestrator | Sunday 05 April 2026 02:27:21 +0000 (0:00:01.893) 0:00:08.568 **********
2026-04-05 02:27:34.066149 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.066174 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.066198 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.066216 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.066235 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.066253 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.066273 | orchestrator |
2026-04-05 02:27:34.066293 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-05 02:27:34.066312 | orchestrator | Sunday 05 April 2026 02:27:21 +0000 (0:00:00.711) 0:00:09.280 **********
2026-04-05 02:27:34.066331 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.066342 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.066353 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.066364 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.066375 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.066385 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.066396 | orchestrator |
2026-04-05 02:27:34.066407 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-05 02:27:34.066418 | orchestrator | Sunday 05 April 2026 02:27:22 +0000 (0:00:01.043) 0:00:10.324 **********
2026-04-05 02:27:34.066429 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066440 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066450 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.066462 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066472 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066483 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.066494 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066505 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066515 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.066526 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066599 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066613 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.066625 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066636 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066646 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.066657 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 02:27:34.066668 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 02:27:34.066679 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.066690 | orchestrator |
2026-04-05 02:27:34.066700 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-05 02:27:34.066711 | orchestrator | Sunday 05 April 2026 02:27:23 +0000 (0:00:01.250) 0:00:11.068 **********
2026-04-05 02:27:34.066722 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.066733 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.066755 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.066780 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.066791 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.066801 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.066812 | orchestrator |
2026-04-05 02:27:34.066823 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-05 02:27:34.066835 | orchestrator | Sunday 05 April 2026 02:27:24 +0000 (0:00:00.755) 0:00:12.318 **********
2026-04-05 02:27:34.066846 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:27:34.066857 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:27:34.066867 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:27:34.066878 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:27:34.066889 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:27:34.066900 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:27:34.066911 | orchestrator |
2026-04-05 02:27:34.066922 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-05 02:27:34.066933 | orchestrator | Sunday 05 April 2026 02:27:25 +0000 (0:00:00.755) 0:00:13.073 **********
2026-04-05 02:27:34.066944 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:27:34.066954 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:27:34.066965 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:27:34.066977 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:27:34.066987 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:27:34.066998 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:27:34.067008 | orchestrator |
2026-04-05 02:27:34.067019 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-05 02:27:34.067030 | orchestrator | Sunday 05 April 2026 02:27:30 +0000 (0:00:04.727) 0:00:17.801 **********
2026-04-05 02:27:34.067041 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.067059 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.067070 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.067081 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.067092 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.067102 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.067113 | orchestrator |
2026-04-05 02:27:34.067124 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-05 02:27:34.067134 | orchestrator | Sunday 05 April 2026 02:27:31 +0000 (0:00:00.875) 0:00:18.676 **********
2026-04-05 02:27:34.067145 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.067156 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.067168 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.067187 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.067205 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.067224 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.067244 | orchestrator |
2026-04-05 02:27:34.067263 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-05 02:27:34.067279 | orchestrator | Sunday 05 April 2026 02:27:32 +0000 (0:00:01.240) 0:00:19.917 **********
2026-04-05 02:27:34.067290 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.067301 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.067311 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.067322 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.067333 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.067344 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.067354 | orchestrator |
2026-04-05 02:27:34.067366 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-05 02:27:34.067376 | orchestrator | Sunday 05 April 2026 02:27:33 +0000 (0:00:00.642) 0:00:20.559 **********
2026-04-05 02:27:34.067387 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-05 02:27:34.067404 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-05 02:27:34.067416 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:27:34.067426 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-05 02:27:34.067448 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-05 02:27:34.067460 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:27:34.067470 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-05 02:27:34.067481 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-05 02:27:34.067492 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:27:34.067503 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-05 02:27:34.067514 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-05 02:27:34.067525 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:27:34.067536 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-05 02:27:34.067578 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-05 02:27:34.067590 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:27:34.067602 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-05 02:27:34.067613 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-05 02:27:34.067624 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:27:34.067635 | orchestrator |
2026-04-05 02:27:34.067646 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-05 02:27:34.067665 | orchestrator | Sunday 05 April 2026 02:27:34 +0000 (0:00:00.874) 0:00:21.433 **********
2026-04-05 02:28:48.972419 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:28:48.972542 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:28:48.972559 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:28:48.972632 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:28:48.972644 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:28:48.972658 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:28:48.972678 | orchestrator |
2026-04-05 02:28:48.972701 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-05 02:28:48.972731 | orchestrator | Sunday 05 April 2026 02:27:34 +0000 (0:00:00.618) 0:00:22.052 **********
2026-04-05 02:28:48.972750 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:28:48.972769 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:28:48.972785 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:28:48.972802 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:28:48.972821 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:28:48.972838 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:28:48.972855 | orchestrator |
2026-04-05 02:28:48.972874 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-05 02:28:48.972893 | orchestrator |
2026-04-05 02:28:48.972910 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-05 02:28:48.972930 | orchestrator | Sunday 05 April 2026 02:27:35 +0000 (0:00:01.324) 0:00:23.376 **********
2026-04-05 02:28:48.972949 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:28:48.972968 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:28:48.972986 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:28:48.973004 | orchestrator |
2026-04-05 02:28:48.973022 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-05 02:28:48.973039 | orchestrator | Sunday 05 April 2026 02:27:37 +0000 (0:00:01.427) 0:00:24.804 **********
2026-04-05 02:28:48.973058 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:28:48.973076 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:28:48.973094 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:28:48.973112 | orchestrator |
2026-04-05 02:28:48.973128 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-05 02:28:48.973147 | orchestrator | Sunday 05 April 2026 02:27:39 +0000 (0:00:01.743) 0:00:26.548 **********
2026-04-05 02:28:48.973165 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:28:48.973183 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:28:48.973201 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:28:48.973219 | orchestrator |
2026-04-05 02:28:48.973240 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-05 02:28:48.973293 | orchestrator | Sunday 05 April 2026 02:27:40 +0000 (0:00:00.796) 0:00:27.475 **********
2026-04-05 02:28:48.973312 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:28:48.973333 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:28:48.973352 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:28:48.973369 | orchestrator |
2026-04-05 02:28:48.973387 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-05 02:28:48.973407 | orchestrator | Sunday 05 April 2026 02:27:40 +0000 (0:00:00.344) 0:00:28.271 **********
2026-04-05 02:28:48.973425 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:28:48.973444 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:28:48.973457 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:28:48.973468 | orchestrator |
2026-04-05 02:28:48.973479 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-05 02:28:48.973511 | orchestrator | Sunday 05 April 2026 02:27:41 +0000 (0:00:00.344) 0:00:28.616 **********
2026-04-05 02:28:48.973522 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:28:48.973533 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:28:48.973544 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:28:48.973555 | orchestrator |
2026-04-05 02:28:48.973595 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-05 02:28:48.973607 | orchestrator | Sunday 05 April 2026 02:27:42 +0000 (0:00:01.049) 0:00:29.666 **********
2026-04-05 02:28:48.973618 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:28:48.973629 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:28:48.973640 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:28:48.973650 | orchestrator |
2026-04-05 02:28:48.973661 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-05 02:28:48.973672 | orchestrator | Sunday 05 April 2026 02:27:43 +0000 (0:00:01.464) 0:00:31.131 **********
2026-04-05 02:28:48.973683 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:28:48.973694 | orchestrator |
2026-04-05 02:28:48.973705 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-05 02:28:48.973716 | orchestrator |
Sunday 05 April 2026 02:27:44 +0000 (0:00:00.495) 0:00:31.626 ********** 2026-04-05 02:28:48.973727 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:28:48.973737 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:28:48.973748 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:28:48.973759 | orchestrator | 2026-04-05 02:28:48.973769 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-05 02:28:48.973780 | orchestrator | Sunday 05 April 2026 02:27:45 +0000 (0:00:01.716) 0:00:33.343 ********** 2026-04-05 02:28:48.973801 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.973820 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.973837 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:28:48.973853 | orchestrator | 2026-04-05 02:28:48.973870 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-05 02:28:48.973886 | orchestrator | Sunday 05 April 2026 02:27:46 +0000 (0:00:00.686) 0:00:34.029 ********** 2026-04-05 02:28:48.973905 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.973922 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.973940 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:28:48.973956 | orchestrator | 2026-04-05 02:28:48.973975 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-05 02:28:48.973993 | orchestrator | Sunday 05 April 2026 02:27:47 +0000 (0:00:00.736) 0:00:34.765 ********** 2026-04-05 02:28:48.974011 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.974113 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.974125 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:28:48.974136 | orchestrator | 2026-04-05 02:28:48.974148 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-05 02:28:48.974185 | orchestrator | Sunday 05 April 2026 
02:27:48 +0000 (0:00:01.199) 0:00:35.965 ********** 2026-04-05 02:28:48.974197 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:28:48.974223 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.974234 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.974245 | orchestrator | 2026-04-05 02:28:48.974256 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-05 02:28:48.974266 | orchestrator | Sunday 05 April 2026 02:27:48 +0000 (0:00:00.300) 0:00:36.266 ********** 2026-04-05 02:28:48.974277 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:28:48.974288 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.974299 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.974310 | orchestrator | 2026-04-05 02:28:48.974320 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-05 02:28:48.974331 | orchestrator | Sunday 05 April 2026 02:27:49 +0000 (0:00:00.554) 0:00:36.820 ********** 2026-04-05 02:28:48.974342 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:28:48.974352 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:28:48.974363 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:28:48.974374 | orchestrator | 2026-04-05 02:28:48.974394 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-05 02:28:48.974405 | orchestrator | Sunday 05 April 2026 02:27:50 +0000 (0:00:01.089) 0:00:37.909 ********** 2026-04-05 02:28:48.974416 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:28:48.974427 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:28:48.974437 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:28:48.974448 | orchestrator | 2026-04-05 02:28:48.974459 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-05 02:28:48.974469 | orchestrator | Sunday 05 April 2026 02:27:53 +0000 
(0:00:02.918) 0:00:40.828 ********** 2026-04-05 02:28:48.974480 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:28:48.974491 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:28:48.974502 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:28:48.974517 | orchestrator | 2026-04-05 02:28:48.974528 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-05 02:28:48.974540 | orchestrator | Sunday 05 April 2026 02:27:53 +0000 (0:00:00.370) 0:00:41.198 ********** 2026-04-05 02:28:48.974551 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-05 02:28:48.974591 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-05 02:28:48.974603 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-05 02:28:48.974615 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-05 02:28:48.974626 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-05 02:28:48.974637 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-05 02:28:48.974648 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-05 02:28:48.974658 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-05 02:28:48.974669 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-05 02:28:48.974680 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-05 02:28:48.974690 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-05 02:28:48.974709 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-05 02:28:48.974720 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-05 02:28:48.974733 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-05 02:28:48.974752 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-05 02:28:48.974770 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:28:48.974789 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:28:48.974807 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:28:48.974827 | orchestrator | 2026-04-05 02:28:48.974854 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-05 02:28:48.974873 | orchestrator | Sunday 05 April 2026 02:28:47 +0000 (0:00:53.868) 0:01:35.066 ********** 2026-04-05 02:28:48.974891 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:28:48.974902 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:28:48.974913 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:28:48.974924 | orchestrator | 2026-04-05 02:28:48.974935 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-05 02:28:48.974945 | orchestrator | Sunday 05 April 2026 02:28:47 +0000 (0:00:00.304) 0:01:35.371 ********** 2026-04-05 02:28:48.974966 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035451 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035529 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035536 | orchestrator | 2026-04-05 02:29:31.035541 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-05 02:29:31.035547 | orchestrator | Sunday 05 April 2026 02:28:48 +0000 (0:00:00.981) 0:01:36.352 ********** 2026-04-05 02:29:31.035551 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035555 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035559 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035563 | orchestrator | 2026-04-05 02:29:31.035567 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-05 02:29:31.035571 | orchestrator | Sunday 05 April 2026 02:28:50 +0000 (0:00:01.292) 0:01:37.645 ********** 2026-04-05 02:29:31.035574 
| orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035614 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035619 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035623 | orchestrator | 2026-04-05 02:29:31.035627 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-05 02:29:31.035631 | orchestrator | Sunday 05 April 2026 02:29:15 +0000 (0:00:24.921) 0:02:02.566 ********** 2026-04-05 02:29:31.035635 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:29:31.035640 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:29:31.035643 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035647 | orchestrator | 2026-04-05 02:29:31.035652 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-05 02:29:31.035655 | orchestrator | Sunday 05 April 2026 02:29:15 +0000 (0:00:00.752) 0:02:03.318 ********** 2026-04-05 02:29:31.035660 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:29:31.035664 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:29:31.035668 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035671 | orchestrator | 2026-04-05 02:29:31.035675 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-05 02:29:31.035679 | orchestrator | Sunday 05 April 2026 02:29:16 +0000 (0:00:00.686) 0:02:04.005 ********** 2026-04-05 02:29:31.035683 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035687 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035690 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035694 | orchestrator | 2026-04-05 02:29:31.035698 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-05 02:29:31.035716 | orchestrator | Sunday 05 April 2026 02:29:17 +0000 (0:00:00.648) 0:02:04.653 ********** 2026-04-05 02:29:31.035720 | orchestrator | ok: [testbed-node-0] 
2026-04-05 02:29:31.035724 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:29:31.035728 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035731 | orchestrator | 2026-04-05 02:29:31.035735 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-05 02:29:31.035739 | orchestrator | Sunday 05 April 2026 02:29:18 +0000 (0:00:00.908) 0:02:05.561 ********** 2026-04-05 02:29:31.035742 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:29:31.035746 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:29:31.035750 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035753 | orchestrator | 2026-04-05 02:29:31.035757 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-05 02:29:31.035761 | orchestrator | Sunday 05 April 2026 02:29:18 +0000 (0:00:00.318) 0:02:05.880 ********** 2026-04-05 02:29:31.035765 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035768 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035772 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035776 | orchestrator | 2026-04-05 02:29:31.035779 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-05 02:29:31.035783 | orchestrator | Sunday 05 April 2026 02:29:19 +0000 (0:00:00.702) 0:02:06.583 ********** 2026-04-05 02:29:31.035787 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035791 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035795 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035799 | orchestrator | 2026-04-05 02:29:31.035802 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-05 02:29:31.035806 | orchestrator | Sunday 05 April 2026 02:29:19 +0000 (0:00:00.626) 0:02:07.210 ********** 2026-04-05 02:29:31.035810 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035814 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035817 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035821 | orchestrator | 2026-04-05 02:29:31.035826 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-05 02:29:31.035829 | orchestrator | Sunday 05 April 2026 02:29:20 +0000 (0:00:00.904) 0:02:08.114 ********** 2026-04-05 02:29:31.035836 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:29:31.035840 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:29:31.035843 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:29:31.035847 | orchestrator | 2026-04-05 02:29:31.035851 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-05 02:29:31.035854 | orchestrator | Sunday 05 April 2026 02:29:21 +0000 (0:00:01.075) 0:02:09.190 ********** 2026-04-05 02:29:31.035858 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:29:31.035862 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:29:31.035866 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:29:31.035869 | orchestrator | 2026-04-05 02:29:31.035873 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-05 02:29:31.035877 | orchestrator | Sunday 05 April 2026 02:29:22 +0000 (0:00:00.298) 0:02:09.489 ********** 2026-04-05 02:29:31.035880 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:29:31.035884 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:29:31.035888 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:29:31.035891 | orchestrator | 2026-04-05 02:29:31.035895 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-05 02:29:31.035899 | orchestrator | Sunday 05 April 2026 02:29:22 +0000 (0:00:00.308) 0:02:09.797 ********** 2026-04-05 02:29:31.035903 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:29:31.035906 | orchestrator | 
ok: [testbed-node-1] 2026-04-05 02:29:31.035910 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035916 | orchestrator | 2026-04-05 02:29:31.035922 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-05 02:29:31.035928 | orchestrator | Sunday 05 April 2026 02:29:23 +0000 (0:00:00.659) 0:02:10.457 ********** 2026-04-05 02:29:31.035938 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:29:31.035943 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:29:31.035963 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:29:31.035969 | orchestrator | 2026-04-05 02:29:31.035976 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-05 02:29:31.035983 | orchestrator | Sunday 05 April 2026 02:29:23 +0000 (0:00:00.918) 0:02:11.376 ********** 2026-04-05 02:29:31.035989 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-05 02:29:31.035996 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-05 02:29:31.036002 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-05 02:29:31.036008 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-05 02:29:31.036014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-05 02:29:31.036020 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-05 02:29:31.036027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-05 02:29:31.036032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-05 
02:29:31.036037 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-05 02:29:31.036042 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-05 02:29:31.036047 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-05 02:29:31.036051 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-05 02:29:31.036055 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-05 02:29:31.036060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-05 02:29:31.036064 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-05 02:29:31.036069 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-05 02:29:31.036074 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-05 02:29:31.036078 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-05 02:29:31.036083 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-05 02:29:31.036088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-05 02:29:31.036093 | orchestrator | 2026-04-05 02:29:31.036097 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-05 02:29:31.036102 | orchestrator | 2026-04-05 02:29:31.036107 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-05 02:29:31.036110 | orchestrator | Sunday 05 April 2026 02:29:27 +0000 (0:00:03.084) 
0:02:14.460 ********** 2026-04-05 02:29:31.036114 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:29:31.036118 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:29:31.036122 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:29:31.036125 | orchestrator | 2026-04-05 02:29:31.036140 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-05 02:29:31.036144 | orchestrator | Sunday 05 April 2026 02:29:27 +0000 (0:00:00.400) 0:02:14.861 ********** 2026-04-05 02:29:31.036148 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:29:31.036151 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:29:31.036155 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:29:31.036162 | orchestrator | 2026-04-05 02:29:31.036166 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-05 02:29:31.036170 | orchestrator | Sunday 05 April 2026 02:29:29 +0000 (0:00:01.595) 0:02:16.457 ********** 2026-04-05 02:29:31.036173 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:29:31.036177 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:29:31.036181 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:29:31.036184 | orchestrator | 2026-04-05 02:29:31.036188 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-05 02:29:31.036192 | orchestrator | Sunday 05 April 2026 02:29:29 +0000 (0:00:00.363) 0:02:16.820 ********** 2026-04-05 02:29:31.036196 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 02:29:31.036200 | orchestrator | 2026-04-05 02:29:31.036203 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-05 02:29:31.036207 | orchestrator | Sunday 05 April 2026 02:29:29 +0000 (0:00:00.522) 0:02:17.343 ********** 2026-04-05 02:29:31.036211 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
02:29:31.036215 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:29:31.036218 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:29:31.036222 | orchestrator | 2026-04-05 02:29:31.036226 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-05 02:29:31.036229 | orchestrator | Sunday 05 April 2026 02:29:30 +0000 (0:00:00.559) 0:02:17.902 ********** 2026-04-05 02:29:31.036233 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:29:31.036237 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:29:31.036240 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:29:31.036244 | orchestrator | 2026-04-05 02:29:31.036248 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-05 02:29:31.036254 | orchestrator | Sunday 05 April 2026 02:29:30 +0000 (0:00:00.332) 0:02:18.235 ********** 2026-04-05 02:29:31.036264 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:31:13.952306 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:31:13.952452 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:31:13.952471 | orchestrator | 2026-04-05 02:31:13.952484 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-05 02:31:13.952497 | orchestrator | Sunday 05 April 2026 02:29:31 +0000 (0:00:00.331) 0:02:18.567 ********** 2026-04-05 02:31:13.952508 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:31:13.952520 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:31:13.952531 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:31:13.952542 | orchestrator | 2026-04-05 02:31:13.952553 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-05 02:31:13.952564 | orchestrator | Sunday 05 April 2026 02:29:31 +0000 (0:00:00.659) 0:02:19.226 ********** 2026-04-05 02:31:13.952575 | orchestrator | changed: [testbed-node-3] 2026-04-05 
02:31:13.952586 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:31:13.952597 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:31:13.952608 | orchestrator | 2026-04-05 02:31:13.952671 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-05 02:31:13.952685 | orchestrator | Sunday 05 April 2026 02:29:33 +0000 (0:00:01.554) 0:02:20.781 ********** 2026-04-05 02:31:13.952697 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:31:13.952708 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:31:13.952719 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:31:13.952730 | orchestrator | 2026-04-05 02:31:13.952741 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-05 02:31:13.952752 | orchestrator | Sunday 05 April 2026 02:29:34 +0000 (0:00:01.332) 0:02:22.113 ********** 2026-04-05 02:31:13.952764 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:31:13.952775 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:31:13.952786 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:31:13.952796 | orchestrator | 2026-04-05 02:31:13.952808 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-05 02:31:13.952842 | orchestrator | 2026-04-05 02:31:13.952856 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-05 02:31:13.952869 | orchestrator | Sunday 05 April 2026 02:29:44 +0000 (0:00:10.124) 0:02:32.238 ********** 2026-04-05 02:31:13.952883 | orchestrator | ok: [testbed-manager] 2026-04-05 02:31:13.952897 | orchestrator | 2026-04-05 02:31:13.952910 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-05 02:31:13.952921 | orchestrator | Sunday 05 April 2026 02:29:45 +0000 (0:00:00.827) 0:02:33.065 ********** 2026-04-05 02:31:13.952932 | orchestrator | changed: [testbed-manager] 
2026-04-05 02:31:13.952943 | orchestrator | 2026-04-05 02:31:13.952955 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-05 02:31:13.952966 | orchestrator | Sunday 05 April 2026 02:29:46 +0000 (0:00:00.835) 0:02:33.901 ********** 2026-04-05 02:31:13.952977 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-05 02:31:13.952988 | orchestrator | 2026-04-05 02:31:13.952998 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-05 02:31:13.953010 | orchestrator | Sunday 05 April 2026 02:29:47 +0000 (0:00:00.570) 0:02:34.471 ********** 2026-04-05 02:31:13.953021 | orchestrator | changed: [testbed-manager] 2026-04-05 02:31:13.953031 | orchestrator | 2026-04-05 02:31:13.953042 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-05 02:31:13.953053 | orchestrator | Sunday 05 April 2026 02:29:48 +0000 (0:00:00.940) 0:02:35.412 ********** 2026-04-05 02:31:13.953064 | orchestrator | changed: [testbed-manager] 2026-04-05 02:31:13.953075 | orchestrator | 2026-04-05 02:31:13.953086 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-05 02:31:13.953096 | orchestrator | Sunday 05 April 2026 02:29:48 +0000 (0:00:00.662) 0:02:36.074 ********** 2026-04-05 02:31:13.953107 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 02:31:13.953118 | orchestrator | 2026-04-05 02:31:13.953129 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-05 02:31:13.953140 | orchestrator | Sunday 05 April 2026 02:29:50 +0000 (0:00:01.617) 0:02:37.692 ********** 2026-04-05 02:31:13.953151 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 02:31:13.953162 | orchestrator | 2026-04-05 02:31:13.953195 | orchestrator | TASK [Set KUBECONFIG environment variable] 
*************************************
2026-04-05 02:31:13.953206 | orchestrator | Sunday 05 April 2026 02:29:51 +0000 (0:00:00.902) 0:02:38.595 **********
2026-04-05 02:31:13.953217 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:13.953228 | orchestrator |
2026-04-05 02:31:13.953239 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 02:31:13.953250 | orchestrator | Sunday 05 April 2026 02:29:51 +0000 (0:00:00.443) 0:02:39.038 **********
2026-04-05 02:31:13.953261 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:13.953272 | orchestrator |
2026-04-05 02:31:13.953282 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-05 02:31:13.953293 | orchestrator |
2026-04-05 02:31:13.953304 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-05 02:31:13.953316 | orchestrator | Sunday 05 April 2026 02:29:52 +0000 (0:00:00.500) 0:02:39.539 **********
2026-04-05 02:31:13.953327 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:13.953338 | orchestrator |
2026-04-05 02:31:13.953349 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-05 02:31:13.953360 | orchestrator | Sunday 05 April 2026 02:29:52 +0000 (0:00:00.494) 0:02:40.034 **********
2026-04-05 02:31:13.953371 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 02:31:13.953383 | orchestrator |
2026-04-05 02:31:13.953394 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-05 02:31:13.953404 | orchestrator | Sunday 05 April 2026 02:29:52 +0000 (0:00:00.911) 0:02:40.283 **********
2026-04-05 02:31:13.953415 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:13.953426 | orchestrator |
2026-04-05 02:31:13.953445 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-05 02:31:13.953456 | orchestrator | Sunday 05 April 2026 02:29:53 +0000 (0:00:00.911) 0:02:41.195 **********
2026-04-05 02:31:13.953467 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:13.953478 | orchestrator |
2026-04-05 02:31:13.953507 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-05 02:31:13.953519 | orchestrator | Sunday 05 April 2026 02:29:55 +0000 (0:00:01.902) 0:02:43.097 **********
2026-04-05 02:31:13.953530 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:13.953540 | orchestrator |
2026-04-05 02:31:13.953551 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-05 02:31:13.953563 | orchestrator | Sunday 05 April 2026 02:29:56 +0000 (0:00:00.888) 0:02:43.986 **********
2026-04-05 02:31:13.953573 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:13.953584 | orchestrator |
2026-04-05 02:31:13.953595 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-05 02:31:13.953607 | orchestrator | Sunday 05 April 2026 02:29:57 +0000 (0:00:00.523) 0:02:44.510 **********
2026-04-05 02:31:13.953642 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:13.953660 | orchestrator |
2026-04-05 02:31:13.953679 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-05 02:31:13.953698 | orchestrator | Sunday 05 April 2026 02:30:05 +0000 (0:00:08.551) 0:02:53.062 **********
2026-04-05 02:31:13.953709 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:13.953720 | orchestrator |
2026-04-05 02:31:13.953731 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-05 02:31:13.953742 | orchestrator | Sunday 05 April 2026 02:30:19 +0000 (0:00:13.382) 0:03:06.444 **********
2026-04-05 02:31:13.953753 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:13.953763 | orchestrator |
2026-04-05 02:31:13.953774 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-05 02:31:13.953785 | orchestrator |
2026-04-05 02:31:13.953796 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-05 02:31:13.953807 | orchestrator | Sunday 05 April 2026 02:30:19 +0000 (0:00:00.809) 0:03:07.254 **********
2026-04-05 02:31:13.953818 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:31:13.953829 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:31:13.953840 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:31:13.953851 | orchestrator |
2026-04-05 02:31:13.953862 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-05 02:31:13.953873 | orchestrator | Sunday 05 April 2026 02:30:20 +0000 (0:00:00.316) 0:03:07.570 **********
2026-04-05 02:31:13.953884 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.953895 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:31:13.953905 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:31:13.953916 | orchestrator |
2026-04-05 02:31:13.953927 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-05 02:31:13.953938 | orchestrator | Sunday 05 April 2026 02:30:20 +0000 (0:00:00.356) 0:03:07.927 **********
2026-04-05 02:31:13.953949 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:31:13.953960 | orchestrator |
2026-04-05 02:31:13.953971 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-05 02:31:13.953982 | orchestrator | Sunday 05 April 2026 02:30:21 +0000 (0:00:00.749) 0:03:08.676 **********
2026-04-05 02:31:13.953993 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 02:31:13.954004 | orchestrator |
2026-04-05 02:31:13.954015 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-05 02:31:13.954089 | orchestrator | Sunday 05 April 2026 02:30:22 +0000 (0:00:00.871) 0:03:09.548 **********
2026-04-05 02:31:13.954100 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 02:31:13.954111 | orchestrator |
2026-04-05 02:31:13.954122 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-05 02:31:13.954142 | orchestrator | Sunday 05 April 2026 02:30:23 +0000 (0:00:00.938) 0:03:10.487 **********
2026-04-05 02:31:13.954153 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.954163 | orchestrator |
2026-04-05 02:31:13.954174 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-05 02:31:13.954185 | orchestrator | Sunday 05 April 2026 02:30:23 +0000 (0:00:00.131) 0:03:10.618 **********
2026-04-05 02:31:13.954196 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 02:31:13.954206 | orchestrator |
2026-04-05 02:31:13.954217 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-05 02:31:13.954228 | orchestrator | Sunday 05 April 2026 02:30:24 +0000 (0:00:01.050) 0:03:11.669 **********
2026-04-05 02:31:13.954239 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.954250 | orchestrator |
2026-04-05 02:31:13.954260 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-05 02:31:13.954271 | orchestrator | Sunday 05 April 2026 02:30:24 +0000 (0:00:00.130) 0:03:11.800 **********
2026-04-05 02:31:13.954281 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.954292 | orchestrator |
2026-04-05 02:31:13.954303 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-05 02:31:13.954314 | orchestrator | Sunday 05 April 2026 02:30:24 +0000 (0:00:00.138) 0:03:11.938 **********
2026-04-05 02:31:13.954324 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.954335 | orchestrator |
2026-04-05 02:31:13.954346 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-05 02:31:13.954363 | orchestrator | Sunday 05 April 2026 02:30:24 +0000 (0:00:00.128) 0:03:12.067 **********
2026-04-05 02:31:13.954374 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:13.954385 | orchestrator |
2026-04-05 02:31:13.954396 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-05 02:31:13.954407 | orchestrator | Sunday 05 April 2026 02:30:24 +0000 (0:00:00.125) 0:03:12.192 **********
2026-04-05 02:31:13.954418 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 02:31:13.954428 | orchestrator |
2026-04-05 02:31:13.954439 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-05 02:31:13.954450 | orchestrator | Sunday 05 April 2026 02:30:31 +0000 (0:00:06.517) 0:03:18.710 **********
2026-04-05 02:31:13.954461 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-05 02:31:13.954472 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-05 02:31:13.954492 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-05 02:31:38.478766 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-05 02:31:38.478878 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-05 02:31:38.478895 | orchestrator |
2026-04-05 02:31:38.478907 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-05 02:31:38.478919 | orchestrator | Sunday 05 April 2026 02:31:13 +0000 (0:00:42.606) 0:04:01.316 **********
2026-04-05 02:31:38.478930 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 02:31:38.478942 | orchestrator |
2026-04-05 02:31:38.478953 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-05 02:31:38.478964 | orchestrator | Sunday 05 April 2026 02:31:15 +0000 (0:00:01.596) 0:04:02.913 **********
2026-04-05 02:31:38.478975 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 02:31:38.478986 | orchestrator |
2026-04-05 02:31:38.478997 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-05 02:31:38.479008 | orchestrator | Sunday 05 April 2026 02:31:17 +0000 (0:00:01.976) 0:04:04.889 **********
2026-04-05 02:31:38.479019 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 02:31:38.479030 | orchestrator |
2026-04-05 02:31:38.479041 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-05 02:31:38.479052 | orchestrator | Sunday 05 April 2026 02:31:18 +0000 (0:00:01.359) 0:04:06.248 **********
2026-04-05 02:31:38.479087 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:38.479099 | orchestrator |
2026-04-05 02:31:38.479110 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-05 02:31:38.479121 | orchestrator | Sunday 05 April 2026 02:31:18 +0000 (0:00:00.138) 0:04:06.386 **********
2026-04-05 02:31:38.479131 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-05 02:31:38.479143 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-05 02:31:38.479154 | orchestrator |
2026-04-05 02:31:38.479164 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-05 02:31:38.479175 | orchestrator | Sunday 05 April 2026 02:31:21 +0000 (0:00:02.191) 0:04:08.578 **********
2026-04-05 02:31:38.479186 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:38.479197 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:31:38.479208 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:31:38.479218 | orchestrator |
2026-04-05 02:31:38.479234 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-05 02:31:38.479252 | orchestrator | Sunday 05 April 2026 02:31:21 +0000 (0:00:00.304) 0:04:08.882 **********
2026-04-05 02:31:38.479271 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:31:38.479290 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:31:38.479309 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:31:38.479327 | orchestrator |
2026-04-05 02:31:38.479341 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-05 02:31:38.479353 | orchestrator |
2026-04-05 02:31:38.479381 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-05 02:31:38.479394 | orchestrator | Sunday 05 April 2026 02:31:22 +0000 (0:00:00.873) 0:04:09.756 **********
2026-04-05 02:31:38.479407 | orchestrator | ok: [testbed-manager]
2026-04-05 02:31:38.479420 | orchestrator |
2026-04-05 02:31:38.479431 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-05 02:31:38.479442 | orchestrator | Sunday 05 April 2026 02:31:22 +0000 (0:00:00.248) 0:04:10.124 **********
2026-04-05 02:31:38.479453 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 02:31:38.479464 | orchestrator |
2026-04-05 02:31:38.479474 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-05 02:31:38.479485 | orchestrator | Sunday 05 April 2026 02:31:22 +0000 (0:00:00.248) 0:04:10.373 **********
2026-04-05 02:31:38.479496 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:38.479507 | orchestrator |
2026-04-05 02:31:38.479517 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-05 02:31:38.479528 | orchestrator |
2026-04-05 02:31:38.479539 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-05 02:31:38.479550 | orchestrator | Sunday 05 April 2026 02:31:28 +0000 (0:00:05.441) 0:04:15.814 **********
2026-04-05 02:31:38.479561 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:31:38.479571 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:31:38.479582 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:31:38.479592 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:31:38.479603 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:31:38.479614 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:31:38.479624 | orchestrator |
2026-04-05 02:31:38.479664 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-05 02:31:38.479676 | orchestrator | Sunday 05 April 2026 02:31:29 +0000 (0:00:00.633) 0:04:16.447 **********
2026-04-05 02:31:38.479687 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 02:31:38.479698 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 02:31:38.479708 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 02:31:38.479719 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 02:31:38.479740 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-05 02:31:38.479751 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-05 02:31:38.479761 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 02:31:38.479772 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 02:31:38.479783 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-05 02:31:38.479812 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 02:31:38.479823 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 02:31:38.479835 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 02:31:38.479846 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-05 02:31:38.479856 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 02:31:38.479867 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 02:31:38.479895 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 02:31:38.479906 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-05 02:31:38.479916 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-05 02:31:38.479927 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 02:31:38.479938 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 02:31:38.479948 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-05 02:31:38.479959 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 02:31:38.479970 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 02:31:38.479981 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-05 02:31:38.479991 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 02:31:38.480002 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 02:31:38.480012 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-05 02:31:38.480023 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 02:31:38.480034 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 02:31:38.480045 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-05 02:31:38.480055 | orchestrator |
2026-04-05 02:31:38.480066 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-05 02:31:38.480077 | orchestrator | Sunday 05 April 2026 02:31:37 +0000 (0:00:08.047) 0:04:24.495 **********
2026-04-05 02:31:38.480088 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:31:38.480099 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:31:38.480109 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:31:38.480120 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:38.480131 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:31:38.480141 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:31:38.480152 | orchestrator |
2026-04-05 02:31:38.480162 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-05 02:31:38.480173 | orchestrator | Sunday 05 April 2026 02:31:37 +0000 (0:00:00.712) 0:04:25.207 **********
2026-04-05 02:31:38.480184 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:31:38.480204 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:31:38.480214 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:31:38.480225 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:31:38.480236 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:31:38.480246 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:31:38.480257 | orchestrator |
2026-04-05 02:31:38.480268 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:31:38.480279 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:31:38.480293 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-05 02:31:38.480304 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 02:31:38.480315 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-05 02:31:38.480326 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 02:31:38.480336 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 02:31:38.480347 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 02:31:38.480358 | orchestrator |
2026-04-05 02:31:38.480369 | orchestrator |
2026-04-05 02:31:38.480380 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:31:38.480391 | orchestrator | Sunday 05 April 2026 02:31:38 +0000 (0:00:00.640) 0:04:25.848 **********
2026-04-05 02:31:38.480409 | orchestrator | ===============================================================================
2026-04-05 02:31:38.918589 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.87s
2026-04-05 02:31:38.918736 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.61s
2026-04-05 02:31:38.918747 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.92s
2026-04-05 02:31:38.918754 | orchestrator | kubectl : Install required packages ------------------------------------ 13.38s
2026-04-05 02:31:38.918760 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.12s
2026-04-05 02:31:38.918767 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.55s
2026-04-05 02:31:38.918775 | orchestrator | Manage labels ----------------------------------------------------------- 8.05s
2026-04-05 02:31:38.918782 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.52s
2026-04-05 02:31:38.918789 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.44s
2026-04-05 02:31:38.918798 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.73s
2026-04-05 02:31:38.918805 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.08s
2026-04-05 02:31:38.918814 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.92s
2026-04-05 02:31:38.918821 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.19s
2026-04-05 02:31:38.918828 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.09s
2026-04-05 02:31:38.918835 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.98s
2026-04-05 02:31:38.918842 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.90s
2026-04-05 02:31:38.918849 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.89s
2026-04-05 02:31:38.918878 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.88s
2026-04-05 02:31:38.918886 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.74s
2026-04-05 02:31:38.918893 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.72s
2026-04-05 02:31:39.263044 | orchestrator | + osism apply copy-kubeconfig
2026-04-05 02:31:51.476304 | orchestrator | 2026-04-05 02:31:51 | INFO  | Task 6961ccf8-44ee-4fa0-95d4-fe63ed0552b4 (copy-kubeconfig) was prepared for execution.
2026-04-05 02:31:51.476402 | orchestrator | 2026-04-05 02:31:51 | INFO  | It takes a moment until task 6961ccf8-44ee-4fa0-95d4-fe63ed0552b4 (copy-kubeconfig) has been started and output is visible here.
2026-04-05 02:31:59.265135 | orchestrator |
2026-04-05 02:31:59.265216 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-05 02:31:59.265224 | orchestrator |
2026-04-05 02:31:59.265228 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 02:31:59.265233 | orchestrator | Sunday 05 April 2026 02:31:56 +0000 (0:00:00.192) 0:00:00.192 **********
2026-04-05 02:31:59.265238 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 02:31:59.265242 | orchestrator |
2026-04-05 02:31:59.265246 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 02:31:59.265250 | orchestrator | Sunday 05 April 2026 02:31:57 +0000 (0:00:00.833) 0:00:01.025 **********
2026-04-05 02:31:59.265270 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:59.265275 | orchestrator |
2026-04-05 02:31:59.265279 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-05 02:31:59.265283 | orchestrator | Sunday 05 April 2026 02:31:58 +0000 (0:00:01.379) 0:00:02.405 **********
2026-04-05 02:31:59.265289 | orchestrator | changed: [testbed-manager]
2026-04-05 02:31:59.265293 | orchestrator |
2026-04-05 02:31:59.265300 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:31:59.265305 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:31:59.265310 | orchestrator |
2026-04-05 02:31:59.265314 | orchestrator |
2026-04-05 02:31:59.265318 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:31:59.265322 | orchestrator | Sunday 05 April 2026 02:31:58 +0000 (0:00:00.527) 0:00:02.933 **********
2026-04-05 02:31:59.265326 | orchestrator | ===============================================================================
2026-04-05 02:31:59.265330 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.38s
2026-04-05 02:31:59.265334 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s
2026-04-05 02:31:59.265337 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s
2026-04-05 02:31:59.651727 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-04-05 02:32:11.864470 | orchestrator | 2026-04-05 02:32:11 | INFO  | Task 2d2c65d3-809b-4f71-a0c8-e525ca4e5e53 (openstackclient) was prepared for execution.
2026-04-05 02:32:11.864556 | orchestrator | 2026-04-05 02:32:11 | INFO  | It takes a moment until task 2d2c65d3-809b-4f71-a0c8-e525ca4e5e53 (openstackclient) has been started and output is visible here.
2026-04-05 02:33:01.605283 | orchestrator |
2026-04-05 02:33:01.605398 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-05 02:33:01.605417 | orchestrator |
2026-04-05 02:33:01.605429 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-05 02:33:01.605443 | orchestrator | Sunday 05 April 2026 02:32:16 +0000 (0:00:00.251) 0:00:00.251 **********
2026-04-05 02:33:01.605457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-05 02:33:01.605470 | orchestrator |
2026-04-05 02:33:01.605510 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-05 02:33:01.605523 | orchestrator | Sunday 05 April 2026 02:32:16 +0000 (0:00:00.227) 0:00:00.479 **********
2026-04-05 02:33:01.605534 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-05 02:33:01.605547 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-05 02:33:01.605559 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-05 02:33:01.605571 | orchestrator |
2026-04-05 02:33:01.605582 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-05 02:33:01.605594 | orchestrator | Sunday 05 April 2026 02:32:17 +0000 (0:00:01.261) 0:00:01.740 **********
2026-04-05 02:33:01.605605 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:01.605616 | orchestrator |
2026-04-05 02:33:01.605626 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-05 02:33:01.605636 | orchestrator | Sunday 05 April 2026 02:32:19 +0000 (0:00:01.568) 0:00:03.308 **********
2026-04-05 02:33:01.605695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-05 02:33:01.605712 | orchestrator | ok: [testbed-manager]
2026-04-05 02:33:01.605725 | orchestrator |
2026-04-05 02:33:01.605736 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-05 02:33:01.605748 | orchestrator | Sunday 05 April 2026 02:32:55 +0000 (0:00:36.389) 0:00:39.698 **********
2026-04-05 02:33:01.605759 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:01.605770 | orchestrator |
2026-04-05 02:33:01.605781 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-05 02:33:01.605792 | orchestrator | Sunday 05 April 2026 02:32:56 +0000 (0:00:00.914) 0:00:40.613 **********
2026-04-05 02:33:01.605804 | orchestrator | ok: [testbed-manager]
2026-04-05 02:33:01.605815 | orchestrator |
2026-04-05 02:33:01.605826 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-05 02:33:01.605837 | orchestrator | Sunday 05 April 2026 02:32:57 +0000 (0:00:00.645) 0:00:41.258 **********
2026-04-05 02:33:01.605848 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:01.605859 | orchestrator |
2026-04-05 02:33:01.605872 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-05 02:33:01.605885 | orchestrator | Sunday 05 April 2026 02:32:59 +0000 (0:00:01.893) 0:00:43.152 **********
2026-04-05 02:33:01.605897 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:01.605909 | orchestrator |
2026-04-05 02:33:01.605921 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-05 02:33:01.605933 | orchestrator | Sunday 05 April 2026 02:33:00 +0000 (0:00:00.622) 0:00:43.916 **********
2026-04-05 02:33:01.605945 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:01.605956 | orchestrator |
2026-04-05 02:33:01.605968 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-05 02:33:01.605979 | orchestrator | Sunday 05 April 2026 02:33:00 +0000 (0:00:00.622) 0:00:44.539 **********
2026-04-05 02:33:01.605991 | orchestrator | ok: [testbed-manager]
2026-04-05 02:33:01.606002 | orchestrator |
2026-04-05 02:33:01.606014 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:33:01.606080 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:33:01.606092 | orchestrator |
2026-04-05 02:33:01.606104 | orchestrator |
2026-04-05 02:33:01.606115 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:33:01.606126 | orchestrator | Sunday 05 April 2026 02:33:01 +0000 (0:00:00.429) 0:00:44.968 **********
2026-04-05 02:33:01.606138 | orchestrator | ===============================================================================
2026-04-05 02:33:01.606149 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.39s
2026-04-05 02:33:01.606161 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.89s
2026-04-05 02:33:01.606183 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.57s
2026-04-05 02:33:01.606194 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.26s
2026-04-05 02:33:01.606205 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.91s
2026-04-05 02:33:01.606214 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.76s
2026-04-05 02:33:01.606225 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2026-04-05 02:33:01.606236 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s
2026-04-05 02:33:01.606247 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2026-04-05 02:33:01.606258 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s
2026-04-05 02:33:04.083696 | orchestrator | 2026-04-05 02:33:04 | INFO  | Task ef58ba02-92b8-4289-9862-4be4e6f51a08 (common) was prepared for execution.
2026-04-05 02:33:04.083793 | orchestrator | 2026-04-05 02:33:04 | INFO  | It takes a moment until task ef58ba02-92b8-4289-9862-4be4e6f51a08 (common) has been started and output is visible here.
2026-04-05 02:33:16.853007 | orchestrator |
2026-04-05 02:33:16.853102 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-05 02:33:16.853114 | orchestrator |
2026-04-05 02:33:16.853122 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 02:33:16.853131 | orchestrator | Sunday 05 April 2026 02:33:08 +0000 (0:00:00.303) 0:00:00.303 **********
2026-04-05 02:33:16.853139 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:33:16.853148 | orchestrator |
2026-04-05 02:33:16.853156 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-05 02:33:16.853163 | orchestrator | Sunday 05 April 2026 02:33:09 +0000 (0:00:01.393) 0:00:01.697 **********
2026-04-05 02:33:16.853171 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853178 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853186 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853193 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853200 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853208 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853215 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853222 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853229 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853253 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853261 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 02:33:16.853268 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853276 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853283 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853290 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853297 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853305 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 02:33:16.853330 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853338 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853345 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853352 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 02:33:16.853360 | orchestrator |
2026-04-05 02:33:16.853367 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 02:33:16.853374 | orchestrator | Sunday 05 April 2026 02:33:12 +0000 (0:00:02.893) 0:00:04.590 **********
2026-04-05 02:33:16.853382 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:33:16.853390 | orchestrator |
2026-04-05 02:33:16.853397 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-05 02:33:16.853409 | orchestrator | Sunday 05 April 2026 02:33:14 +0000 (0:00:01.339) 0:00:05.929 **********
2026-04-05 02:33:16.853418 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:16.853429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:16.853455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:16.853464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:16.853472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:16.853479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:16.853493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:16.853501 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:16.853508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:16.853530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147455 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
02:33:18.147521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:18.147617 | orchestrator | 2026-04-05 02:33:18.147632 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-05 02:33:18.147673 | orchestrator | Sunday 05 April 2026 02:33:17 +0000 (0:00:03.757) 0:00:09.687 ********** 2026-04-05 02:33:18.147692 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.147707 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.147717 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.147725 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:33:18.147735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.147756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748344 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:33:18.748432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.748459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748499 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:33:18.748520 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.748546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748585 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:33:18.748630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.748694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748738 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:33:18.748758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.748779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:18.748819 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:33:18.748840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:18.748871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:19.625334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:19.625437 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:33:19.625455 | orchestrator | 2026-04-05 02:33:19.625468 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-05 02:33:19.625480 | orchestrator | Sunday 05 April 2026 02:33:18 +0000 (0:00:00.911) 0:00:10.599 ********** 2026-04-05 02:33:19.625493 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:19.625513 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:19.625532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:19.625549 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:33:19.625591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-05 02:33:19.625620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625721 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:33:19.625762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:19.625775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625798 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:33:19.625809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:19.625820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:19.625855 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:33:19.625869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:19.625901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.739871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.739966 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:33:24.739982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.739995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.740007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.740017 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:33:24.740027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.740071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:24.740081 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:33:24.740091 | orchestrator |
2026-04-05 02:33:24.740102 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-05 02:33:24.740113 | orchestrator | Sunday 05 April 2026 02:33:20 +0000 (0:00:01.831) 0:00:12.431 **********
2026-04-05 02:33:24.740123 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:33:24.740132 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:33:24.740143 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:33:24.740152 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:33:24.740177 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:33:24.740187 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:33:24.740197 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:33:24.740206 | orchestrator |
2026-04-05 02:33:24.740216 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-05 02:33:24.740225 | orchestrator | Sunday 05 April 2026 02:33:21 +0000 (0:00:00.719) 0:00:13.151 **********
2026-04-05 02:33:24.740235 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:33:24.740245 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:33:24.740254 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:33:24.740264 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:33:24.740274 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:33:24.740283 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:33:24.740293 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:33:24.740303 | orchestrator |
2026-04-05 02:33:24.740312 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-04-05 02:33:24.740322 | orchestrator | Sunday 05 April 2026 02:33:22 +0000 (0:00:00.881) 0:00:14.033 **********
2026-04-05 02:33:24.740333 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:24.740437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:27.616529 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616825 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:27.616944 | orchestrator |
2026-04-05 02:33:27.616956 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-05 02:33:27.616968 | orchestrator | Sunday 05 April 2026 02:33:25 +0000 (0:00:03.445) 0:00:17.478 **********
2026-04-05 02:33:27.616977 | orchestrator | [WARNING]: Skipped
2026-04-05 02:33:27.616989 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-05 02:33:27.617001 | orchestrator | to this access issue:
2026-04-05 02:33:27.617011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-05 02:33:27.617021 | orchestrator | directory
2026-04-05 02:33:27.617031 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:33:27.617042 | orchestrator |
2026-04-05 02:33:27.617052 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-05 02:33:27.617062 | orchestrator | Sunday 05 April 2026 02:33:26 +0000 (0:00:01.039) 0:00:18.518 **********
2026-04-05 02:33:27.617074 | orchestrator | [WARNING]: Skipped
2026-04-05 02:33:27.617092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-05 02:33:37.994822 | orchestrator | to this access issue:
2026-04-05 02:33:37.994935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-05 02:33:37.994959 | orchestrator | directory
2026-04-05 02:33:37.994977 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:33:37.994995 | orchestrator |
2026-04-05 02:33:37.995012 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-05 02:33:37.995030 | orchestrator | Sunday 05 April 2026 02:33:27 +0000 (0:00:01.254) 0:00:19.773 **********
2026-04-05 02:33:37.995074 | orchestrator | [WARNING]: Skipped
2026-04-05 02:33:37.995091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-05 02:33:37.995107 | orchestrator | to this access issue:
2026-04-05 02:33:37.995124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-05 02:33:37.995140 | orchestrator | directory
2026-04-05 02:33:37.995156 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:33:37.995172 | orchestrator |
2026-04-05 02:33:37.995189 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-05 02:33:37.995206 | orchestrator | Sunday 05 April 2026 02:33:28 +0000 (0:00:00.901) 0:00:20.674 **********
2026-04-05 02:33:37.995222 | orchestrator | [WARNING]: Skipped
2026-04-05 02:33:37.995241 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-05 02:33:37.995252 | orchestrator | to this access issue:
2026-04-05 02:33:37.995261 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-05 02:33:37.995271 | orchestrator | directory
2026-04-05 02:33:37.995280 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 02:33:37.995289 | orchestrator |
2026-04-05 02:33:37.995299 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-05 02:33:37.995308 | orchestrator | Sunday 05 April 2026 02:33:29 +0000 (0:00:00.882) 0:00:21.557 **********
2026-04-05 02:33:37.995318 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:33:37.995327 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:33:37.995337 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:37.995348 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:33:37.995360 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:33:37.995371 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:33:37.995400 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:33:37.995412 | orchestrator |
2026-04-05 02:33:37.995423 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-05 02:33:37.995434 | orchestrator | Sunday 05 April 2026 02:33:32 +0000 (0:00:02.614) 0:00:24.172 **********
2026-04-05 02:33:37.995446 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995459 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995491 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995507 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995519 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-05 02:33:37.995530 | orchestrator |
2026-04-05 02:33:37.995541 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-05 02:33:37.995553 | orchestrator | Sunday 05 April 2026 02:33:34 +0000 (0:00:02.256) 0:00:26.428 **********
2026-04-05 02:33:37.995565 | orchestrator | changed: [testbed-manager]
2026-04-05 02:33:37.995576 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:33:37.995587 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:33:37.995598 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:33:37.995610 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:33:37.995620 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:33:37.995636 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:33:37.995652 | orchestrator |
2026-04-05 02:33:37.995761 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-04-05 02:33:37.995793 | orchestrator | Sunday 05 April 2026 02:33:36 +0000 (0:00:01.997) 0:00:28.425 **********
2026-04-05 02:33:37.995815 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:37.995860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:37.995879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:37.995891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:37.995901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:37.995917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:37.995928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:37.995960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:37.995991 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:37.996013 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:44.002638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:44.002904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:44.002938 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:44.002980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:44.003033 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:44.003059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:33:44.003080 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 02:33:44.003129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name':
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:33:44.003152 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.003258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.003283 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.003305 | orchestrator | 2026-04-05 02:33:44.003326 | 
orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-05 02:33:44.003347 | orchestrator | Sunday 05 April 2026 02:33:38 +0000 (0:00:01.619) 0:00:30.045 **********
2026-04-05 02:33:44.003368 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003388 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003423 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003464 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003484 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003502 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 02:33:44.003521 | orchestrator |
2026-04-05 02:33:44.003541 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-05 02:33:44.003560 | orchestrator | Sunday 05 April 2026 02:33:40 +0000 (0:00:01.948) 0:00:31.994 **********
2026-04-05 02:33:44.003579 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 02:33:44.003600 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 02:33:44.003618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 02:33:44.003650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 02:33:44.003703 | orchestrator | changed:
[testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 02:33:44.003722 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 02:33:44.003742 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-05 02:33:44.003760 | orchestrator | 2026-04-05 02:33:44.003778 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-05 02:33:44.003796 | orchestrator | Sunday 05 April 2026 02:33:41 +0000 (0:00:01.738) 0:00:33.732 ********** 2026-04-05 02:33:44.003815 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.003851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.633855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 02:33:44.633867 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.633898 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.633917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.633953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.633978 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.634006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.634102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.634124 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:33:44.634158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:35:11.327125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:35:11.327226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:35:11.327236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:35:11.327254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:35:11.327261 | orchestrator |
2026-04-05 02:35:11.327268 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-05 02:35:11.327276 | orchestrator | Sunday 05 April 2026 02:33:44 +0000 (0:00:02.753) 0:00:36.486 **********
2026-04-05 02:35:11.327282 | orchestrator | changed: [testbed-manager]
2026-04-05 02:35:11.327289 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:35:11.327295 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:35:11.327301 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:35:11.327307 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:35:11.327313 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:35:11.327319 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:35:11.327324 | orchestrator |
2026-04-05 02:35:11.327330 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-05 02:35:11.327336 | orchestrator | Sunday 05 April 2026 02:33:46 +0000 (0:00:01.380) 0:00:37.867 **********
2026-04-05 02:35:11.327342 | orchestrator | changed: [testbed-manager]
2026-04-05 02:35:11.327348 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:35:11.327354 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:35:11.327360 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:35:11.327365 | orchestrator | changed:
[testbed-node-3] 2026-04-05 02:35:11.327371 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:35:11.327376 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:35:11.327382 | orchestrator | 2026-04-05 02:35:11.327388 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327394 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:01.081) 0:00:38.948 ********** 2026-04-05 02:35:11.327399 | orchestrator | 2026-04-05 02:35:11.327405 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327411 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.067) 0:00:39.015 ********** 2026-04-05 02:35:11.327417 | orchestrator | 2026-04-05 02:35:11.327422 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327428 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.080) 0:00:39.096 ********** 2026-04-05 02:35:11.327434 | orchestrator | 2026-04-05 02:35:11.327440 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327446 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.065) 0:00:39.162 ********** 2026-04-05 02:35:11.327451 | orchestrator | 2026-04-05 02:35:11.327457 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327468 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.233) 0:00:39.395 ********** 2026-04-05 02:35:11.327473 | orchestrator | 2026-04-05 02:35:11.327479 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-05 02:35:11.327485 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.064) 0:00:39.459 ********** 2026-04-05 02:35:11.327491 | orchestrator | 2026-04-05 02:35:11.327497 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-05 02:35:11.327503 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.060) 0:00:39.519 ********** 2026-04-05 02:35:11.327508 | orchestrator | 2026-04-05 02:35:11.327514 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-05 02:35:11.327520 | orchestrator | Sunday 05 April 2026 02:33:47 +0000 (0:00:00.090) 0:00:39.610 ********** 2026-04-05 02:35:11.327526 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:35:11.327531 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:35:11.327537 | orchestrator | changed: [testbed-manager] 2026-04-05 02:35:11.327543 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:35:11.327549 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:35:11.327565 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:35:11.327572 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:35:11.327577 | orchestrator | 2026-04-05 02:35:11.327583 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-05 02:35:11.327589 | orchestrator | Sunday 05 April 2026 02:34:24 +0000 (0:00:37.143) 0:01:16.754 ********** 2026-04-05 02:35:11.327595 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:35:11.327601 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:35:11.327606 | orchestrator | changed: [testbed-manager] 2026-04-05 02:35:11.327612 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:35:11.327618 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:35:11.327623 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:35:11.327629 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:35:11.327635 | orchestrator | 2026-04-05 02:35:11.327641 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-05 02:35:11.327646 | orchestrator | Sunday 05 April 2026 02:35:00 +0000 (0:00:35.680) 0:01:52.434 
**********
2026-04-05 02:35:11.327652 | orchestrator | ok: [testbed-manager]
2026-04-05 02:35:11.327659 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:35:11.327665 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:35:11.327671 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:35:11.327676 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:35:11.327720 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:35:11.327727 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:35:11.327734 | orchestrator |
2026-04-05 02:35:11.327741 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-05 02:35:11.327748 | orchestrator | Sunday 05 April 2026 02:35:02 +0000 (0:00:01.969) 0:01:54.403 **********
2026-04-05 02:35:11.327755 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:35:11.327762 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:35:11.327769 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:35:11.327776 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:35:11.327782 | orchestrator | changed: [testbed-manager]
2026-04-05 02:35:11.327789 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:35:11.327795 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:35:11.327801 | orchestrator |
2026-04-05 02:35:11.327808 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:35:11.327817 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327825 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327837 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327850 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327857 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327864 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327871 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 02:35:11.327877 | orchestrator |
2026-04-05 02:35:11.327884 | orchestrator |
2026-04-05 02:35:11.327891 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:35:11.327898 | orchestrator | Sunday 05 April 2026 02:35:11 +0000 (0:00:08.762) 0:02:03.166 **********
2026-04-05 02:35:11.327905 | orchestrator | ===============================================================================
2026-04-05 02:35:11.327912 | orchestrator | common : Restart fluentd container ------------------------------------- 37.14s
2026-04-05 02:35:11.327919 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.68s
2026-04-05 02:35:11.327925 | orchestrator | common : Restart cron container ----------------------------------------- 8.76s
2026-04-05 02:35:11.327932 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.76s
2026-04-05 02:35:11.327939 | orchestrator | common : Copying over config.json files for services -------------------- 3.45s
2026-04-05 02:35:11.327945 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.89s
2026-04-05 02:35:11.327953 | orchestrator | common : Check common containers ---------------------------------------- 2.75s
2026-04-05 02:35:11.327959 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.61s
2026-04-05 02:35:11.327966 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.26s
2026-04-05 02:35:11.327973 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.00s
2026-04-05 02:35:11.327980 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s
2026-04-05 02:35:11.327987 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.95s
2026-04-05 02:35:11.327994 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.83s
2026-04-05 02:35:11.328000 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s
2026-04-05 02:35:11.328006 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.62s
2026-04-05 02:35:11.328011 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s
2026-04-05 02:35:11.328022 | orchestrator | common : Creating log volume -------------------------------------------- 1.38s
2026-04-05 02:35:11.757919 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s
2026-04-05 02:35:11.758078 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.25s
2026-04-05 02:35:11.758103 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.08s
2026-04-05 02:35:14.177028 | orchestrator | 2026-04-05 02:35:14 | INFO  | Task 7716a960-1a43-480c-b9dd-49c6b2baeb4a (loadbalancer) was prepared for execution.
2026-04-05 02:35:14.177137 | orchestrator | 2026-04-05 02:35:14 | INFO  | It takes a moment until task 7716a960-1a43-480c-b9dd-49c6b2baeb4a (loadbalancer) has been started and output is visible here.
2026-04-05 02:35:29.400857 | orchestrator |
2026-04-05 02:35:29.400947 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:35:29.400959 | orchestrator |
2026-04-05 02:35:29.400966 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:35:29.400974 | orchestrator | Sunday 05 April 2026 02:35:18 +0000 (0:00:00.267) 0:00:00.267 **********
2026-04-05 02:35:29.401001 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:35:29.401009 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:35:29.401016 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:35:29.401023 | orchestrator |
2026-04-05 02:35:29.401030 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:35:29.401037 | orchestrator | Sunday 05 April 2026 02:35:18 +0000 (0:00:00.322) 0:00:00.589 **********
2026-04-05 02:35:29.401044 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-05 02:35:29.401051 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-05 02:35:29.401058 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-05 02:35:29.401065 | orchestrator |
2026-04-05 02:35:29.401072 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-05 02:35:29.401078 | orchestrator |
2026-04-05 02:35:29.401085 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 02:35:29.401103 | orchestrator | Sunday 05 April 2026 02:35:19 +0000 (0:00:00.441) 0:00:01.031 **********
2026-04-05 02:35:29.401110 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:35:29.401118 | orchestrator |
2026-04-05 02:35:29.401124 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-05 02:35:29.401131 | orchestrator | Sunday 05 April 2026 02:35:19 +0000 (0:00:00.542) 0:00:01.574 **********
2026-04-05 02:35:29.401138 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:35:29.401144 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:35:29.401151 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:35:29.401157 | orchestrator |
2026-04-05 02:35:29.401164 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-05 02:35:29.401171 | orchestrator | Sunday 05 April 2026 02:35:20 +0000 (0:00:00.649) 0:00:02.223 **********
2026-04-05 02:35:29.401177 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:35:29.401184 | orchestrator |
2026-04-05 02:35:29.401190 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-05 02:35:29.401197 | orchestrator | Sunday 05 April 2026 02:35:21 +0000 (0:00:00.688) 0:00:02.912 **********
2026-04-05 02:35:29.401204 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:35:29.401210 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:35:29.401217 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:35:29.401224 | orchestrator |
2026-04-05 02:35:29.401230 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-05 02:35:29.401237 | orchestrator | Sunday 05 April 2026 02:35:21 +0000 (0:00:00.609) 0:00:03.521 **********
2026-04-05 02:35:29.401243 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401270 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 02:35:29.401277 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 02:35:29.401284 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401291 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 02:35:29.401297 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 02:35:29.401304 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-05 02:35:29.401315 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-05 02:35:29.401322 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-05 02:35:29.401329 | orchestrator |
2026-04-05 02:35:29.401335 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 02:35:29.401342 | orchestrator | Sunday 05 April 2026 02:35:24 +0000 (0:00:03.111) 0:00:06.633 **********
2026-04-05 02:35:29.401349 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 02:35:29.401356 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 02:35:29.401363 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 02:35:29.401369 | orchestrator |
2026-04-05 02:35:29.401376 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 02:35:29.401383 | orchestrator | Sunday 05 April 2026 02:35:25 +0000 (0:00:00.678) 0:00:07.312 **********
2026-04-05 02:35:29.401390 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-05 02:35:29.401397 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-05 02:35:29.401403 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-05 02:35:29.401410 | orchestrator |
2026-04-05 02:35:29.401418 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 02:35:29.401426 | orchestrator | Sunday 05 April 2026 02:35:26 +0000 (0:00:01.339) 0:00:08.652 **********
2026-04-05 02:35:29.401433 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-05 02:35:29.401442 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:35:29.401462 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-05 02:35:29.401470 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:35:29.401478 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-05 02:35:29.401486 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:35:29.401494 | orchestrator |
2026-04-05 02:35:29.401502 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-05 02:35:29.401509 | orchestrator | Sunday 05 April 2026 02:35:27 +0000 (0:00:00.557) 0:00:09.209 **********
2026-04-05 02:35:29.401524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 02:35:29.401537 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:29.401546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:29.401558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 
02:35:29.401565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:29.401578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:34.743816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:34.743888 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:34.743903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:34.743915 | orchestrator | 2026-04-05 02:35:34.743927 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-05 02:35:34.743940 | orchestrator | Sunday 05 April 2026 02:35:29 +0000 (0:00:01.838) 0:00:11.047 ********** 2026-04-05 02:35:34.743951 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:35:34.743977 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:35:34.743988 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:35:34.744000 | orchestrator | 2026-04-05 02:35:34.744011 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-05 02:35:34.744021 | orchestrator | Sunday 05 April 2026 02:35:30 +0000 (0:00:00.930) 0:00:11.978 ********** 2026-04-05 02:35:34.744033 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-05 02:35:34.744044 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-05 
02:35:34.744055 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-05 02:35:34.744066 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-05 02:35:34.744076 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-05 02:35:34.744087 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-05 02:35:34.744097 | orchestrator | 2026-04-05 02:35:34.744108 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-05 02:35:34.744119 | orchestrator | Sunday 05 April 2026 02:35:31 +0000 (0:00:01.498) 0:00:13.476 ********** 2026-04-05 02:35:34.744129 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:35:34.744140 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:35:34.744151 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:35:34.744161 | orchestrator | 2026-04-05 02:35:34.744172 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-05 02:35:34.744183 | orchestrator | Sunday 05 April 2026 02:35:32 +0000 (0:00:00.932) 0:00:14.409 ********** 2026-04-05 02:35:34.744194 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:35:34.744205 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:35:34.744216 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:35:34.744233 | orchestrator | 2026-04-05 02:35:34.744258 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-05 02:35:34.744284 | orchestrator | Sunday 05 April 2026 02:35:34 +0000 (0:00:01.351) 0:00:15.761 ********** 2026-04-05 02:35:34.744304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:35:34.744339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:34.744361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:34.744384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:34.744418 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:35:34.744442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:35:34.744506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:34.744521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:34.744534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:34.744547 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:35:34.744570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:35:37.673272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:37.673441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:37.673471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:37.673494 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:35:37.673516 | orchestrator | 2026-04-05 02:35:37.673530 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-04-05 02:35:37.673543 | orchestrator | Sunday 05 April 2026 02:35:34 +0000 (0:00:00.636) 0:00:16.397 ********** 2026-04-05 02:35:37.673555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:37.673568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:37.673579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:37.673638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:37.673652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:37.673664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', 
'__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:37.673675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:37.673736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:37.673750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', 
'__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:37.673799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:46.284632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c', 
'__omit_place_holder__e60f9ae76a1c64f1e5b1faf5f89b8e1347bf912c'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 02:35:46.284650 | orchestrator | 2026-04-05 02:35:46.284664 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-05 02:35:46.284677 | orchestrator | Sunday 05 April 2026 02:35:37 +0000 (0:00:02.929) 0:00:19.326 ********** 2026-04-05 02:35:46.284713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284833 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:46.284844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:46.284856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:46.284868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:46.284879 | orchestrator | 2026-04-05 02:35:46.284890 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-05 02:35:46.284901 | orchestrator | Sunday 05 April 2026 02:35:40 +0000 (0:00:03.212) 0:00:22.539 ********** 2026-04-05 02:35:46.284921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 02:35:46.284933 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 02:35:46.284944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-05 02:35:46.284955 | orchestrator | 2026-04-05 02:35:46.284966 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-05 02:35:46.284977 | orchestrator | Sunday 05 April 2026 02:35:42 +0000 (0:00:01.914) 0:00:24.453 ********** 2026-04-05 02:35:46.284988 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 02:35:46.285000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 02:35:46.285011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-05 02:35:46.285021 | orchestrator | 2026-04-05 02:35:46.285032 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-05 02:35:46.285043 | orchestrator | Sunday 05 April 2026 02:35:45 +0000 
(0:00:02.910) 0:00:27.363 ********** 2026-04-05 02:35:46.285054 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:35:46.285067 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:35:46.285078 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:35:46.285090 | orchestrator | 2026-04-05 02:35:46.285108 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-05 02:35:58.567362 | orchestrator | Sunday 05 April 2026 02:35:46 +0000 (0:00:00.579) 0:00:27.942 ********** 2026-04-05 02:35:58.567471 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 02:35:58.567501 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 02:35:58.567513 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-05 02:35:58.567524 | orchestrator | 2026-04-05 02:35:58.567536 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-05 02:35:58.567548 | orchestrator | Sunday 05 April 2026 02:35:48 +0000 (0:00:02.239) 0:00:30.182 ********** 2026-04-05 02:35:58.567560 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 02:35:58.567571 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 02:35:58.567582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-05 02:35:58.567593 | orchestrator | 2026-04-05 02:35:58.567603 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-05 02:35:58.567614 | orchestrator | Sunday 05 April 2026 
02:35:50 +0000 (0:00:02.310) 0:00:32.492 ********** 2026-04-05 02:35:58.567626 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-05 02:35:58.567637 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-05 02:35:58.567648 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-05 02:35:58.567659 | orchestrator | 2026-04-05 02:35:58.567683 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-05 02:35:58.567743 | orchestrator | Sunday 05 April 2026 02:35:52 +0000 (0:00:01.437) 0:00:33.930 ********** 2026-04-05 02:35:58.567756 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-05 02:35:58.567767 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-05 02:35:58.567778 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-05 02:35:58.567789 | orchestrator | 2026-04-05 02:35:58.567823 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-05 02:35:58.567835 | orchestrator | Sunday 05 April 2026 02:35:53 +0000 (0:00:01.521) 0:00:35.451 ********** 2026-04-05 02:35:58.567846 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:35:58.567857 | orchestrator | 2026-04-05 02:35:58.567868 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-05 02:35:58.567879 | orchestrator | Sunday 05 April 2026 02:35:54 +0000 (0:00:00.589) 0:00:36.041 ********** 2026-04-05 02:35:58.567892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:58.567907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:58.567924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 02:35:58.567956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:58.567969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:58.567981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 02:35:58.568001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:58.568013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:58.568024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 02:35:58.568035 | orchestrator | 2026-04-05 02:35:58.568047 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-05 02:35:58.568058 | orchestrator | Sunday 05 April 2026 02:35:57 +0000 (0:00:03.562) 0:00:39.603 ********** 2026-04-05 02:35:58.568083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:35:59.481192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:59.481320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:59.481360 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:35:59.481376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:35:59.481388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:59.481400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:59.481411 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:35:59.481436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:35:59.481468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:59.481480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:59.481499 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:35:59.481511 | orchestrator | 2026-04-05 02:35:59.481523 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-05 
02:35:59.481536 | orchestrator | Sunday 05 April 2026 02:35:58 +0000 (0:00:00.626) 0:00:40.229 ********** 2026-04-05 02:35:59.481549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:35:59.481560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:35:59.481572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:35:59.481583 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:35:59.481595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:35:59.481620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:00.449926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:00.450125 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:00.450149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:00.450163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:00.450175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:00.450186 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:00.450197 | orchestrator | 2026-04-05 02:36:00.450210 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 02:36:00.450222 | orchestrator | Sunday 05 April 2026 02:35:59 +0000 (0:00:00.903) 0:00:41.133 ********** 2026-04-05 02:36:00.450233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:36:00.450246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:00.450277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:00.450385 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:00.450403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:36:00.450452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:00.450466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:00.450479 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:00.450492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:00.450522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:00.450541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:00.450573 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:01.877579 | orchestrator | 2026-04-05 02:36:01.877646 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 02:36:01.877653 | orchestrator | Sunday 05 April 2026 02:36:00 +0000 (0:00:00.964) 0:00:42.098 ********** 2026-04-05 02:36:01.877660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:36:01.877669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:01.877674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:01.877679 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:01.877685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:36:01.877689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:01.877739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:01.877759 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:01.877775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:01.877780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:01.877784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:01.877787 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:01.877791 | orchestrator | 2026-04-05 02:36:01.877795 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 02:36:01.877800 | orchestrator | Sunday 05 April 2026 02:36:01 +0000 (0:00:00.617) 0:00:42.716 ********** 2026-04-05 02:36:01.877804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:36:01.877808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:01.877821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:01.877825 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:01.877834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:36:03.067004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:03.067074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:03.067080 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:03.067086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:03.067090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:03.067094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:03.067116 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:03.067120 | orchestrator | 2026-04-05 02:36:03.067125 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-05 02:36:03.067131 | orchestrator | Sunday 05 April 2026 02:36:01 +0000 (0:00:00.818) 0:00:43.535 ********** 2026-04-05 02:36:03.067145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-05 02:36:03.067160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:03.067164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:03.067168 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:03.067172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-05 02:36:03.067176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:03.067184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:03.067188 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:03.067195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-05 02:36:03.067202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:04.586197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:04.586310 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:04.586330 | orchestrator | 2026-04-05 02:36:04.586343 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-05 02:36:04.586355 | orchestrator | Sunday 05 April 2026 02:36:03 +0000 (0:00:01.180) 0:00:44.715 ********** 2026-04-05 02:36:04.586369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:36:04.586382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:04.586458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:04.586473 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:04.586485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:36:04.586511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:04.586542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:04.586555 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:04.586566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:04.586578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:04.586599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:04.586610 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:04.586621 | orchestrator | 2026-04-05 02:36:04.586633 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-05 02:36:04.586644 | orchestrator | Sunday 05 April 2026 02:36:03 +0000 (0:00:00.635) 0:00:45.351 ********** 2026-04-05 02:36:04.586656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 02:36:04.586668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:04.586717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:11.239360 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:11.239512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 02:36:11.239547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:11.239601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:11.239624 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:11.239644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 02:36:11.239674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 02:36:11.239687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 02:36:11.239803 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:11.239827 | orchestrator | 2026-04-05 02:36:11.239847 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-05 02:36:11.239868 | orchestrator | Sunday 05 April 2026 02:36:04 +0000 (0:00:00.888) 0:00:46.240 ********** 2026-04-05 02:36:11.239888 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 02:36:11.239936 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 02:36:11.239957 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 02:36:11.239977 | orchestrator | 2026-04-05 02:36:11.239995 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-05 02:36:11.240016 | orchestrator | Sunday 05 April 2026 02:36:06 +0000 (0:00:01.742) 0:00:47.982 ********** 2026-04-05 02:36:11.240036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 02:36:11.240055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 02:36:11.240074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 02:36:11.240086 | orchestrator | 2026-04-05 02:36:11.240110 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-05 02:36:11.240121 | orchestrator | Sunday 05 April 2026 02:36:07 +0000 (0:00:01.683) 0:00:49.666 ********** 2026-04-05 02:36:11.240132 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 02:36:11.240143 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 02:36:11.240154 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 02:36:11.240178 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 02:36:11.240189 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:36:11.240201 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 02:36:11.240211 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:36:11.240222 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 02:36:11.240233 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:36:11.240244 | orchestrator |
2026-04-05 02:36:11.240255 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-04-05 02:36:11.240265 | orchestrator | Sunday 05 April 2026 02:36:08 +0000 (0:00:00.810) 0:00:50.476 **********
2026-04-05 02:36:11.240278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 02:36:11.240290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 02:36:11.240310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 02:36:11.240335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 02:36:15.395463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 02:36:15.395573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 02:36:15.395595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 02:36:15.395615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 02:36:15.395655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 02:36:15.395678 | orchestrator |
2026-04-05 02:36:15.395765 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-04-05 02:36:15.395781 | orchestrator | Sunday 05 April 2026 02:36:11 +0000 (0:00:02.417) 0:00:52.894 **********
2026-04-05 02:36:15.395793 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:36:15.395804 | orchestrator |
2026-04-05 02:36:15.395815 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-04-05 02:36:15.395826 | orchestrator | Sunday 05 April 2026 02:36:12 +0000 (0:00:00.817) 0:00:53.711 **********
2026-04-05 02:36:15.395862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:15.395897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:15.395910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:15.395922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:15.395934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:15.395953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:15.395976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:16.076323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:16.076486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076578 | orchestrator |
2026-04-05 02:36:16.076595 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-04-05 02:36:16.076608 | orchestrator | Sunday 05 April 2026 02:36:15 +0000 (0:00:03.341) 0:00:57.052 **********
2026-04-05 02:36:16.076621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:16.076672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:16.076686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076776 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:36:16.076789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:16.076807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:16.076827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:16.076848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:24.590355 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:36:24.590460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-04-05 02:36:24.590478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-04-05 02:36:24.590490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:24.590501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-04-05 02:36:24.590535 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:36:24.590547 | orchestrator |
2026-04-05 02:36:24.590558 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-04-05 02:36:24.590570 | orchestrator | Sunday 05 April 2026 02:36:16 +0000 (0:00:00.680) 0:00:57.733 **********
2026-04-05 02:36:24.590581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590604 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:36:24.590629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590650 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:36:24.590659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-04-05 02:36:24.590696 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:36:24.590758 | orchestrator |
2026-04-05 02:36:24.590769 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-04-05 02:36:24.590779 | orchestrator | Sunday 05 April 2026 02:36:17 +0000 (0:00:01.086) 0:00:58.819 **********
2026-04-05 02:36:24.590789 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:36:24.590799 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:36:24.590809 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:36:24.590818 | orchestrator |
2026-04-05 02:36:24.590829 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-04-05 02:36:24.590839 | orchestrator | Sunday 05 April 2026 02:36:18 +0000 (0:00:01.357) 0:01:00.176 **********
2026-04-05 02:36:24.590849 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:36:24.590858 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:36:24.590868 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:36:24.590877 | orchestrator |
2026-04-05 02:36:24.590888 | orchestrator | TASK [include_role : barbican] *************************************************
2026-04-05 02:36:24.590900 | orchestrator | Sunday 05 April 2026 02:36:20 +0000 (0:00:02.111) 0:01:02.287 **********
2026-04-05 02:36:24.590911 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:36:24.590922 | orchestrator |
2026-04-05 02:36:24.590933 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-04-05 02:36:24.590945 | orchestrator | Sunday 05 April 2026 02:36:21 +0000 (0:00:00.607) 0:01:02.895 **********
2026-04-05 02:36:24.590958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:24.590986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:24.591000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:36:24.591020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:25.215299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:25.215479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215505 | orchestrator |
2026-04-05 02:36:25.215519 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-04-05 02:36:25.215532 | orchestrator | Sunday 05 April 2026 02:36:24 +0000 (0:00:03.349) 0:01:06.244 **********
2026-04-05 02:36:25.215562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:25.215576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215607 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:36:25.215625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:25.215637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:36:25.215660 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:36:25.215680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-05 02:36:35.046935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 02:36:35.047040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:35.047055 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:35.047068 | orchestrator | 2026-04-05 02:36:35.047078 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-05 02:36:35.047089 | orchestrator | Sunday 05 April 2026 02:36:25 +0000 (0:00:00.624) 0:01:06.868 ********** 2026-04-05 02:36:35.047113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047135 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:35.047144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047162 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:35.047171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-05 02:36:35.047189 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:35.047197 | orchestrator | 2026-04-05 02:36:35.047206 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-05 02:36:35.047215 | orchestrator | Sunday 05 April 2026 02:36:26 +0000 (0:00:00.847) 0:01:07.716 ********** 2026-04-05 02:36:35.047224 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:36:35.047233 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:36:35.047242 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:36:35.047251 | orchestrator | 2026-04-05 02:36:35.047260 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-05 02:36:35.047268 | orchestrator | Sunday 05 April 2026 02:36:27 +0000 (0:00:01.566) 0:01:09.283 ********** 2026-04-05 02:36:35.047297 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:36:35.047307 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:36:35.047315 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:36:35.047324 | orchestrator | 2026-04-05 02:36:35.047332 | orchestrator | TASK [include_role : blazar] 
*************************************************** 2026-04-05 02:36:35.047341 | orchestrator | Sunday 05 April 2026 02:36:29 +0000 (0:00:02.147) 0:01:11.431 ********** 2026-04-05 02:36:35.047350 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:35.047358 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:35.047367 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:35.047375 | orchestrator | 2026-04-05 02:36:35.047384 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-05 02:36:35.047392 | orchestrator | Sunday 05 April 2026 02:36:30 +0000 (0:00:00.307) 0:01:11.739 ********** 2026-04-05 02:36:35.047401 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:36:35.047410 | orchestrator | 2026-04-05 02:36:35.047418 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-05 02:36:35.047443 | orchestrator | Sunday 05 April 2026 02:36:30 +0000 (0:00:00.684) 0:01:12.423 ********** 2026-04-05 02:36:35.047456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 02:36:35.047472 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 02:36:35.047484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 02:36:35.047495 | orchestrator | 2026-04-05 02:36:35.047506 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-05 02:36:35.047518 | orchestrator | Sunday 05 April 2026 02:36:33 +0000 (0:00:02.872) 0:01:15.295 ********** 2026-04-05 
02:36:35.047537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 02:36:35.047549 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:35.047567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 02:36:42.796250 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:42.796392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 02:36:42.796416 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:42.796428 | orchestrator | 2026-04-05 02:36:42.796441 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-05 02:36:42.796453 | orchestrator | Sunday 05 April 2026 02:36:35 +0000 (0:00:01.409) 0:01:16.705 ********** 2026-04-05 02:36:42.796483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796509 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796547 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:42.796559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796571 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:42.796582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-05 02:36:42.796605 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:42.796616 | orchestrator | 2026-04-05 02:36:42.796627 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-05 02:36:42.796638 | orchestrator | Sunday 05 April 2026 02:36:36 +0000 (0:00:01.696) 0:01:18.402 ********** 2026-04-05 02:36:42.796649 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:42.796659 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:42.796670 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:42.796681 | orchestrator | 2026-04-05 02:36:42.796698 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-05 02:36:42.796770 | orchestrator | Sunday 05 April 2026 02:36:37 +0000 (0:00:00.430) 0:01:18.833 ********** 2026-04-05 02:36:42.796792 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:42.796812 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:42.796827 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:42.796840 | orchestrator | 2026-04-05 02:36:42.796854 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-05 02:36:42.796867 | orchestrator | Sunday 05 April 2026 02:36:38 +0000 (0:00:01.279) 0:01:20.112 ********** 2026-04-05 02:36:42.796887 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:36:42.796915 | orchestrator | 2026-04-05 02:36:42.796937 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-05 02:36:42.796955 | orchestrator | Sunday 05 April 2026 02:36:39 +0000 (0:00:01.035) 0:01:21.147 ********** 2026-04-05 02:36:42.796983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 02:36:42.797019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:42.797041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 02:36:42.797064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:42.797101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 02:36:43.448018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448196 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 02:36:43.448208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448279 | orchestrator | 2026-04-05 02:36:43.448293 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-05 02:36:43.448306 | orchestrator | Sunday 05 April 2026 02:36:42 +0000 (0:00:03.388) 0:01:24.536 ********** 2026-04-05 02:36:43.448319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 02:36:43.448331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:43.448366 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:43.448395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 02:36:49.905131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905267 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:49.905281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 02:36:49.905294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 02:36:49.905387 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:49.905398 | orchestrator | 2026-04-05 02:36:49.905410 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-05 02:36:49.905422 | orchestrator | Sunday 05 April 2026 02:36:43 +0000 (0:00:00.671) 0:01:25.208 ********** 2026-04-05 02:36:49.905434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905458 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:49.905470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905480 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905491 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:49.905502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-05 02:36:49.905525 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:49.905536 | orchestrator | 2026-04-05 02:36:49.905546 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-05 02:36:49.905558 | orchestrator | Sunday 05 April 2026 02:36:44 +0000 (0:00:01.312) 0:01:26.520 ********** 2026-04-05 02:36:49.905568 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:36:49.905613 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:36:49.905621 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:36:49.905627 | orchestrator | 2026-04-05 02:36:49.905634 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-05 02:36:49.905641 | orchestrator | Sunday 05 April 2026 02:36:46 +0000 (0:00:01.319) 0:01:27.840 ********** 2026-04-05 02:36:49.905647 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:36:49.905655 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:36:49.905661 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:36:49.905668 | orchestrator | 2026-04-05 02:36:49.905675 | orchestrator | TASK [include_role : cloudkitty] 
*********************************************** 2026-04-05 02:36:49.905682 | orchestrator | Sunday 05 April 2026 02:36:48 +0000 (0:00:02.053) 0:01:29.894 ********** 2026-04-05 02:36:49.905690 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:49.905698 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:49.905734 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:49.905749 | orchestrator | 2026-04-05 02:36:49.905767 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-05 02:36:49.905778 | orchestrator | Sunday 05 April 2026 02:36:48 +0000 (0:00:00.300) 0:01:30.194 ********** 2026-04-05 02:36:49.905789 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:49.905800 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:49.905811 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:36:49.905822 | orchestrator | 2026-04-05 02:36:49.905832 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-05 02:36:49.905843 | orchestrator | Sunday 05 April 2026 02:36:48 +0000 (0:00:00.359) 0:01:30.554 ********** 2026-04-05 02:36:49.905854 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:36:49.905866 | orchestrator | 2026-04-05 02:36:49.905878 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-05 02:36:49.905899 | orchestrator | Sunday 05 April 2026 02:36:49 +0000 (0:00:01.009) 0:01:31.564 ********** 2026-04-05 02:36:53.306786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 02:36:53.306864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 02:36:53.306873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 02:36:53.306945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 02:36:53.306953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:36:53.306963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 
02:36:53.306975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 02:36:54.151917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 02:36:54.152063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152191 | orchestrator | 2026-04-05 02:36:54.152205 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-05 02:36:54.152218 | orchestrator | Sunday 05 April 2026 02:36:53 +0000 (0:00:03.612) 0:01:35.176 ********** 2026-04-05 02:36:54.152230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 02:36:54.152242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2026-04-05 02:36:54.152255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.152306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.644370 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.644490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.644505 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:36:54.644518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 02:36:54.644529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 02:36:54.644972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645083 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:36:54.645094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 02:36:54.645105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 02:36:54.645116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 02:36:54.645139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 02:37:04.706155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 02:37:04.706266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:37:04.706278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 02:37:04.706287 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:04.706296 | orchestrator | 2026-04-05 02:37:04.706304 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-05 02:37:04.706312 | orchestrator | Sunday 05 April 2026 02:36:54 +0000 (0:00:01.124) 0:01:36.300 ********** 2026-04-05 02:37:04.706320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706338 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:04.706345 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706358 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:04.706364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-05 02:37:04.706399 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:04.706406 | orchestrator | 2026-04-05 02:37:04.706412 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-05 02:37:04.706419 | orchestrator | Sunday 05 April 2026 02:36:55 +0000 (0:00:01.338) 0:01:37.638 ********** 2026-04-05 02:37:04.706427 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:04.706434 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:04.706441 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:04.706447 | orchestrator | 2026-04-05 02:37:04.706454 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-05 02:37:04.706461 | orchestrator | Sunday 05 April 2026 02:36:57 +0000 (0:00:01.388) 0:01:39.027 ********** 2026-04-05 02:37:04.706468 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:04.706474 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:04.706481 | 
orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:04.706487 | orchestrator | 2026-04-05 02:37:04.706494 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-05 02:37:04.706501 | orchestrator | Sunday 05 April 2026 02:36:59 +0000 (0:00:02.067) 0:01:41.094 ********** 2026-04-05 02:37:04.706522 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:04.706530 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:04.706537 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:04.706543 | orchestrator | 2026-04-05 02:37:04.706550 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-05 02:37:04.706556 | orchestrator | Sunday 05 April 2026 02:36:59 +0000 (0:00:00.302) 0:01:41.396 ********** 2026-04-05 02:37:04.706563 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:37:04.706569 | orchestrator | 2026-04-05 02:37:04.706575 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-05 02:37:04.706581 | orchestrator | Sunday 05 April 2026 02:37:00 +0000 (0:00:01.054) 0:01:42.451 ********** 2026-04-05 02:37:04.706593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 02:37:04.706603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 02:37:04.706625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 02:37:07.754390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 02:37:07.754560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 02:37:07.754617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 02:37:07.754649 | orchestrator | 2026-04-05 02:37:07.754668 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-05 02:37:07.754687 | orchestrator | Sunday 05 April 2026 02:37:04 +0000 (0:00:04.030) 0:01:46.482 ********** 2026-04-05 02:37:07.754771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 02:37:07.754810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 02:37:11.428854 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:11.428979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 02:37:11.429016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 02:37:11.429050 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:11.429084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 02:37:11.429103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-05 02:37:11.429125 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:11.429136 | orchestrator |
2026-04-05 02:37:11.429149 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-05 02:37:11.429178 | orchestrator | Sunday 05 April 2026 02:37:07 +0000 (0:00:03.049) 0:01:49.532 **********
2026-04-05 02:37:11.429191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:11.429225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:19.781449 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:19.781561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:19.781582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:19.781596 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:19.781608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:19.781638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-05 02:37:19.781650 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:19.781662 | orchestrator |
2026-04-05 02:37:19.781675 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-05 02:37:19.781687 | orchestrator | Sunday 05 April 2026 02:37:11 +0000 (0:00:03.549) 0:01:53.081 **********
2026-04-05 02:37:19.781773 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:19.781787 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:19.781798 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:19.781809 | orchestrator |
2026-04-05 02:37:19.781822 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-05 02:37:19.781842 | orchestrator | Sunday 05 April 2026 02:37:12 +0000 (0:00:01.360) 0:01:54.441 **********
2026-04-05 02:37:19.781860 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:19.781878 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:19.781929 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:19.781948 | orchestrator |
2026-04-05 02:37:19.781967 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-05 02:37:19.781988 | orchestrator | Sunday 05 April 2026 02:37:14 +0000 (0:00:02.061) 0:01:56.502 **********
2026-04-05 02:37:19.782008 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:19.782091 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:19.782105 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:19.782118 | orchestrator |
2026-04-05 02:37:19.782131 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-05 02:37:19.782143 | orchestrator | Sunday 05 April 2026 02:37:15 +0000 (0:00:01.056) 0:01:56.811 **********
2026-04-05 02:37:19.782155 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:37:19.782168 | orchestrator |
2026-04-05 02:37:19.782180 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-05 02:37:19.782193 | orchestrator | Sunday 05 April 2026 02:37:16 +0000 (0:00:01.056) 0:01:57.868 **********
2026-04-05 02:37:19.782225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782265 | orchestrator |
2026-04-05 02:37:19.782276 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-05 02:37:19.782301 | orchestrator | Sunday 05 April 2026 02:37:19 +0000 (0:00:02.948) 0:02:00.817 **********
2026-04-05 02:37:19.782313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782326 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:19.782338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782349 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:19.782361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-05 02:37:19.782442 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:19.782464 | orchestrator |
2026-04-05 02:37:19.782475 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-05 02:37:19.782486 | orchestrator | Sunday 05 April 2026 02:37:19 +0000 (0:00:00.402) 0:02:01.220 **********
2026-04-05 02:37:19.782497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:19.782520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:28.684913 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:28.685026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:28.685045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:28.685058 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:28.685069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:28.685080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-05 02:37:28.685114 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:28.685125 | orchestrator |
2026-04-05 02:37:28.685137 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-05 02:37:28.685149 | orchestrator | Sunday 05 April 2026 02:37:20 +0000 (0:00:00.909) 0:02:02.129 **********
2026-04-05 02:37:28.685160 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:28.685171 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:28.685181 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:28.685192 | orchestrator |
2026-04-05 02:37:28.685203 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-05 02:37:28.685213 | orchestrator | Sunday 05 April 2026 02:37:21 +0000 (0:00:01.347) 0:02:03.477 **********
2026-04-05 02:37:28.685224 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:28.685235 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:28.685245 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:28.685256 | orchestrator |
2026-04-05 02:37:28.685267 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-05 02:37:28.685292 | orchestrator | Sunday 05 April 2026 02:37:23 +0000 (0:00:02.002) 0:02:05.480 **********
2026-04-05 02:37:28.685303 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:28.685313 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:28.685324 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:28.685335 | orchestrator |
2026-04-05 02:37:28.685345 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-05 02:37:28.685356 | orchestrator | Sunday 05 April 2026 02:37:24 +0000 (0:00:00.314) 0:02:05.795 **********
2026-04-05 02:37:28.685367 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:37:28.685377 | orchestrator |
2026-04-05 02:37:28.685388 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-05 02:37:28.685399 | orchestrator | Sunday 05 April 2026 02:37:25 +0000 (0:00:01.142) 0:02:06.937 **********
2026-04-05 02:37:28.685434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:28.685467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:28.685489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:30.397663 | orchestrator |
2026-04-05 02:37:30.397804 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-04-05 02:37:30.397824 | orchestrator | Sunday 05 April 2026 02:37:28 +0000 (0:00:03.404) 0:02:10.342 **********
2026-04-05 02:37:30.397861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:30.397879 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:30.397905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:30.397931 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:30.397945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 02:37:30.397965 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:30.397972 | orchestrator |
2026-04-05 02:37:30.397979 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-04-05 02:37:30.397986 | orchestrator | Sunday 05 April 2026 02:37:29 +0000 (0:00:00.689) 0:02:11.031 **********
2026-04-05 02:37:30.397994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:30.398009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:30.398060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:30.398076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:39.264534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-05 02:37:39.264634 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:39.264646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:39.264657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:39.264678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:39.264695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:39.264710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:39.264776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:39.264786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-05 02:37:39.264793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-05 02:37:39.264816 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:39.264823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-05 02:37:39.264829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-04-05 02:37:39.264835 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:39.264842 | orchestrator |
2026-04-05 02:37:39.264849 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-05 02:37:39.264857 | orchestrator | Sunday 05 April 2026 02:37:30 +0000 (0:00:01.023) 0:02:12.054 **********
2026-04-05 02:37:39.264863 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:39.264869 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:39.264875 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:39.264882 | orchestrator |
2026-04-05 02:37:39.264888 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-05 02:37:39.264894 | orchestrator | Sunday 05 April 2026 02:37:32 +0000 (0:00:01.653) 0:02:13.708 **********
2026-04-05 02:37:39.264901 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:37:39.264907 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:37:39.264914 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:37:39.264920 | orchestrator |
2026-04-05 02:37:39.264926 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-05 02:37:39.264932 | orchestrator | Sunday 05 April 2026 02:37:34 +0000 (0:00:02.087) 0:02:15.795 **********
2026-04-05 02:37:39.264939 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:39.264945 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:39.264964 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:39.264971 | orchestrator |
2026-04-05 02:37:39.264977 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-05 02:37:39.264983 | orchestrator | Sunday 05 April 2026 02:37:34 +0000 (0:00:00.341) 0:02:16.137 **********
2026-04-05 02:37:39.264989 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:37:39.264995 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:37:39.265002 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:37:39.265008 | orchestrator |
2026-04-05 02:37:39.265014 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-05 02:37:39.265021 | orchestrator | Sunday 05 April 2026 02:37:34 +0000 (0:00:00.325) 0:02:16.463 **********
2026-04-05 02:37:39.265027 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:37:39.265033 | orchestrator |
2026-04-05 02:37:39.265039 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-05 02:37:39.265046 | orchestrator | Sunday 05 April 2026 02:37:35 +0000 (0:00:01.182) 0:02:17.646 **********
2026-04-05 02:37:39.265059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 02:37:39.265074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 02:37:39.265082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 02:37:39.265089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy':
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 02:37:39.265102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 02:37:39.876087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 02:37:39.876191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 02:37:39.876226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 02:37:39.876238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 02:37:39.876249 | orchestrator | 2026-04-05 02:37:39.876261 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-05 02:37:39.876272 | orchestrator | Sunday 05 April 2026 02:37:39 +0000 (0:00:03.270) 0:02:20.916 ********** 2026-04-05 02:37:39.876301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 02:37:39.876319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 02:37:39.876338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 02:37:39.876364 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:39.876398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 
02:37:39.876416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 02:37:39.876433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 02:37:39.876451 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:39.876489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 02:37:49.486325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 02:37:49.487362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 02:37:49.487437 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:49.487456 | orchestrator | 2026-04-05 02:37:49.487469 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-04-05 02:37:49.487482 | orchestrator | Sunday 05 April 2026 02:37:39 +0000 (0:00:00.611) 0:02:21.527 ********** 2026-04-05 02:37:49.487494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487521 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:49.487534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487557 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:49.487569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-05 02:37:49.487592 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:49.487603 | orchestrator | 2026-04-05 02:37:49.487614 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-05 02:37:49.487625 | orchestrator | Sunday 05 April 2026 02:37:40 +0000 (0:00:01.083) 0:02:22.610 ********** 2026-04-05 02:37:49.487636 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:49.487647 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:49.487690 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:49.487702 | orchestrator | 2026-04-05 02:37:49.487713 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-05 02:37:49.487750 | orchestrator | Sunday 05 April 2026 02:37:42 +0000 (0:00:01.379) 0:02:23.990 ********** 2026-04-05 02:37:49.487761 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:49.487772 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:49.487783 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:49.487794 | orchestrator | 2026-04-05 02:37:49.487805 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-05 02:37:49.487817 | orchestrator | Sunday 05 April 2026 02:37:44 +0000 (0:00:02.235) 0:02:26.226 ********** 2026-04-05 02:37:49.487827 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:49.487853 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:49.487865 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:49.487876 | orchestrator | 2026-04-05 02:37:49.487887 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-05 02:37:49.487922 | orchestrator | Sunday 05 April 2026 02:37:44 +0000 (0:00:00.327) 0:02:26.553 ********** 2026-04-05 02:37:49.487934 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:37:49.487946 | orchestrator | 2026-04-05 02:37:49.487957 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-05 02:37:49.487985 | orchestrator | Sunday 05 April 2026 02:37:46 +0000 (0:00:01.257) 0:02:27.810 ********** 2026-04-05 02:37:49.488010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 02:37:49.488026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:49.488039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 02:37:49.488060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:49.488082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 02:37:54.971017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:54.971166 | orchestrator | 2026-04-05 02:37:54.971193 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-05 02:37:54.971211 | orchestrator | Sunday 05 April 2026 02:37:49 +0000 (0:00:03.328) 0:02:31.138 ********** 2026-04-05 02:37:54.971230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 02:37:54.971306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:54.971353 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:54.971378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 02:37:54.971417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:54.971433 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:54.971446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 02:37:54.971462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 02:37:54.971487 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:54.971500 | orchestrator | 2026-04-05 02:37:54.971515 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-05 02:37:54.971529 | orchestrator | Sunday 05 April 2026 02:37:50 +0000 (0:00:00.821) 0:02:31.960 ********** 2026-04-05 02:37:54.971545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971577 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:37:54.971592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971622 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:37:54.971637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-05 02:37:54.971667 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:37:54.971682 | orchestrator | 2026-04-05 02:37:54.971702 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-05 02:37:54.971715 | orchestrator | Sunday 05 April 2026 02:37:51 +0000 (0:00:00.872) 0:02:32.832 ********** 2026-04-05 02:37:54.971797 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:54.971812 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:54.971826 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:54.971841 | orchestrator | 2026-04-05 02:37:54.971855 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-05 02:37:54.971870 | orchestrator | Sunday 05 April 2026 02:37:52 +0000 (0:00:01.656) 0:02:34.489 ********** 
2026-04-05 02:37:54.971886 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:37:54.971925 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:37:54.971941 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:37:54.971955 | orchestrator | 2026-04-05 02:37:54.971970 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-05 02:37:54.971999 | orchestrator | Sunday 05 April 2026 02:37:54 +0000 (0:00:02.138) 0:02:36.628 ********** 2026-04-05 02:37:59.444026 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:37:59.444098 | orchestrator | 2026-04-05 02:37:59.444105 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-05 02:37:59.444110 | orchestrator | Sunday 05 April 2026 02:37:56 +0000 (0:00:01.089) 0:02:37.717 ********** 2026-04-05 02:37:59.444118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 02:37:59.444144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 02:37:59.444150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 02:37:59.444190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:37:59.444233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.552629 | orchestrator | 2026-04-05 02:38:00.552777 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-05 02:38:00.552798 | orchestrator | Sunday 05 April 2026 02:37:59 +0000 (0:00:03.472) 0:02:41.190 ********** 2026-04-05 02:38:00.552844 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 02:38:00.552864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.552881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.552897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.552912 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:00.552945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 02:38:00.552982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.553008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.553024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.553039 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:00.553054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 02:38:00.553068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.553089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 02:38:00.553114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 02:38:12.111248 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:12.111355 | orchestrator | 2026-04-05 02:38:12.111374 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-05 02:38:12.111388 | orchestrator | Sunday 05 April 2026 02:38:00 +0000 (0:00:01.108) 0:02:42.298 ********** 2026-04-05 02:38:12.111400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111426 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:12.111439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111462 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:12.111473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-05 02:38:12.111495 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:12.111506 | orchestrator | 2026-04-05 02:38:12.111517 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-05 02:38:12.111528 | orchestrator | Sunday 05 April 2026 02:38:01 +0000 (0:00:00.930) 0:02:43.229 ********** 2026-04-05 02:38:12.111539 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:38:12.111551 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:38:12.111561 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:38:12.111572 | orchestrator | 2026-04-05 02:38:12.111583 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-05 02:38:12.111594 | orchestrator | Sunday 05 April 2026 02:38:02 +0000 (0:00:01.397) 0:02:44.627 ********** 2026-04-05 02:38:12.111605 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:38:12.111635 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:38:12.111657 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:38:12.111668 | orchestrator | 2026-04-05 02:38:12.111679 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-05 02:38:12.111690 | orchestrator | Sunday 05 April 2026 02:38:05 +0000 (0:00:02.140) 0:02:46.768 
********** 2026-04-05 02:38:12.111701 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:38:12.111711 | orchestrator | 2026-04-05 02:38:12.111722 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-05 02:38:12.111753 | orchestrator | Sunday 05 April 2026 02:38:06 +0000 (0:00:01.418) 0:02:48.187 ********** 2026-04-05 02:38:12.111765 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 02:38:12.111776 | orchestrator | 2026-04-05 02:38:12.111787 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-05 02:38:12.111817 | orchestrator | Sunday 05 April 2026 02:38:09 +0000 (0:00:03.211) 0:02:51.398 ********** 2026-04-05 02:38:12.111862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:12.111880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:12.111893 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:12.111910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:12.111931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:12.111942 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 02:38:12.111964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:14.569034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:14.569137 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:14.569158 | orchestrator | 2026-04-05 02:38:14.569167 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-05 02:38:14.569176 | orchestrator | Sunday 05 April 2026 02:38:12 +0000 (0:00:02.361) 0:02:53.760 ********** 2026-04-05 02:38:14.569222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:14.569232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:14.569242 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:14.569278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:14.569319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:14.569333 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:14.569345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:38:14.569366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 02:38:24.501570 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:24.501669 | orchestrator | 2026-04-05 02:38:24.501684 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-05 02:38:24.501698 | orchestrator | Sunday 05 April 2026 02:38:14 +0000 (0:00:02.461) 0:02:56.222 ********** 2026-04-05 02:38:24.501712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501853 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:24.501866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501889 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:24.501900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 02:38:24.501923 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:24.501934 | orchestrator | 2026-04-05 02:38:24.501945 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-05 02:38:24.501956 | orchestrator | Sunday 05 April 2026 02:38:17 +0000 (0:00:02.877) 0:02:59.099 ********** 2026-04-05 02:38:24.501967 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:38:24.502004 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:38:24.502068 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:38:24.502084 | orchestrator | 2026-04-05 02:38:24.502098 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-05 02:38:24.502111 | orchestrator | Sunday 05 April 2026 02:38:19 +0000 (0:00:02.160) 0:03:01.260 ********** 2026-04-05 02:38:24.502124 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:24.502136 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:24.502148 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:24.502161 | orchestrator | 2026-04-05 02:38:24.502180 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-05 02:38:24.502198 | orchestrator | Sunday 05 April 2026 02:38:21 +0000 (0:00:01.471) 0:03:02.732 ********** 2026-04-05 02:38:24.502218 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:24.502238 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:24.502258 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:24.502278 | orchestrator | 2026-04-05 02:38:24.502294 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-05 02:38:24.502307 | orchestrator | Sunday 05 April 2026 02:38:21 +0000 (0:00:00.314) 0:03:03.047 ********** 2026-04-05 02:38:24.502319 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:38:24.502332 | orchestrator | 2026-04-05 02:38:24.502345 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-05 02:38:24.502358 | orchestrator | Sunday 05 April 2026 02:38:22 +0000 (0:00:01.393) 0:03:04.440 ********** 2026-04-05 02:38:24.502380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-04-05 02:38:24.502396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 02:38:24.502410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 02:38:24.502423 | orchestrator | 2026-04-05 02:38:24.502437 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-05 02:38:24.502468 | orchestrator | Sunday 05 April 2026 02:38:24 +0000 (0:00:01.504) 0:03:05.944 ********** 2026-04-05 02:38:24.502499 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 02:38:33.169505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 02:38:33.169677 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:33.169704 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:33.169719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 02:38:33.169852 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:33.169869 | orchestrator | 2026-04-05 02:38:33.169883 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-05 02:38:33.169896 | orchestrator | Sunday 05 April 2026 02:38:24 +0000 (0:00:00.432) 0:03:06.377 ********** 2026-04-05 02:38:33.169909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 02:38:33.169923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 02:38:33.169934 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:33.169946 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:33.169957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 02:38:33.169996 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 02:38:33.170014 | orchestrator | 2026-04-05 02:38:33.170166 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-05 02:38:33.170193 | orchestrator | Sunday 05 April 2026 02:38:25 +0000 (0:00:00.898) 0:03:07.276 ********** 2026-04-05 02:38:33.170206 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:33.170224 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:33.170242 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:33.170259 | orchestrator | 2026-04-05 02:38:33.170275 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-05 02:38:33.170291 | orchestrator | Sunday 05 April 2026 02:38:26 +0000 (0:00:00.464) 0:03:07.740 ********** 2026-04-05 02:38:33.170308 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:33.170324 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:33.170339 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:33.170355 | orchestrator | 2026-04-05 02:38:33.170371 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-05 02:38:33.170387 | orchestrator | Sunday 05 April 2026 02:38:27 +0000 (0:00:01.332) 0:03:09.073 ********** 2026-04-05 02:38:33.170402 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:33.170418 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:33.170433 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:33.170450 | orchestrator | 2026-04-05 02:38:33.170466 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-05 02:38:33.170482 | orchestrator | Sunday 05 April 2026 02:38:27 +0000 (0:00:00.331) 0:03:09.404 ********** 2026-04-05 02:38:33.170498 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:38:33.170516 | orchestrator | 2026-04-05 02:38:33.170533 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-05 02:38:33.170548 | orchestrator | Sunday 05 April 2026 02:38:29 +0000 (0:00:01.516) 0:03:10.921 ********** 2026-04-05 02:38:33.170596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 02:38:33.170629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.170649 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.170679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.170691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 02:38:33.170715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.350297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.350439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.350468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.350518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:33.350540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.350559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:33.350597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.350609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.350626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:33.350648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:33.350659 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 02:38:33.350683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 02:38:33.460453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 02:38:33.460690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 02:38:33.460702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.460728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.460795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.692854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.692979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.692995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.693008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.693020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:33.693035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:33.693072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.693094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.693106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:33.693118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:33.693130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.693142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:33.693154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:33.693179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.770410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:34.770519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:34.770538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:34.770552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:34.770564 | orchestrator | 2026-04-05 02:38:34.770577 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-05 02:38:34.770613 | orchestrator | Sunday 05 April 2026 02:38:33 +0000 (0:00:04.429) 0:03:15.350 ********** 2026-04-05 02:38:34.770660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 02:38:34.770676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.770689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.770701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.770712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-04-05 02:38:34.770792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.770816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:34.864484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:34.864603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 02:38:34.864621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.864636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.864701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:34.864853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.864885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.864906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.864926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:34.864946 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 02:38:34.864994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:34.865020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2026-04-05 02:38:34.958567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.958678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:34.958698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:34.958775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:34.958793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:34.958822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.958836 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:34.958873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 02:38:34.958886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-05 02:38:34.958898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.958921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.958933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:34.958955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:35.198685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:35.198835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:35.198912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-05 02:38:35.198962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:35.198985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:35.199018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:35.199031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:35.199042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:35.199063 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:35.199076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:35.199087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:35.199103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:35.199113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:35.199130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-05 02:38:45.692593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-05 02:38:45.692802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 02:38:45.692872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 02:38:45.692915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 02:38:45.692937 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:45.692958 | orchestrator | 2026-04-05 02:38:45.692979 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-05 02:38:45.692999 | orchestrator | Sunday 05 April 2026 02:38:35 +0000 (0:00:01.504) 0:03:16.855 ********** 2026-04-05 02:38:45.693019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 02:38:45.693040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2026-04-05 02:38:45.693059 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:38:45.693078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 02:38:45.693097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-05 02:38:45.693116 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:38:45.693160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-05 02:38:45.693181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-05 02:38:45.693217 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:38:45.693236 | orchestrator | 2026-04-05 02:38:45.693255 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-05 02:38:45.693274 | orchestrator | Sunday 05 April 2026 02:38:37 +0000 (0:00:02.079) 0:03:18.934 ********** 2026-04-05 02:38:45.693294 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:38:45.693314 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:38:45.693334 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:38:45.693354 | orchestrator | 2026-04-05 02:38:45.693375 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-05 02:38:45.693393 | orchestrator | Sunday 05 April 2026 02:38:38 +0000 (0:00:01.367) 0:03:20.302 ********** 2026-04-05 02:38:45.693412 | orchestrator | changed: 
[testbed-node-0] 2026-04-05 02:38:45.693431 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:38:45.693449 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:38:45.693469 | orchestrator | 2026-04-05 02:38:45.693487 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-05 02:38:45.693504 | orchestrator | Sunday 05 April 2026 02:38:40 +0000 (0:00:02.366) 0:03:22.669 ********** 2026-04-05 02:38:45.693522 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:38:45.693541 | orchestrator | 2026-04-05 02:38:45.693560 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-05 02:38:45.693580 | orchestrator | Sunday 05 April 2026 02:38:42 +0000 (0:00:01.205) 0:03:23.874 ********** 2026-04-05 02:38:45.693601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 02:38:45.693633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 02:38:45.693654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 02:38:45.693687 | orchestrator | 2026-04-05 02:38:45.693707 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-05 02:38:45.693807 | orchestrator | Sunday 05 April 2026 02:38:45 +0000 (0:00:03.467) 
0:03:27.342 **********
2026-04-05 02:38:56.793804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-05 02:38:56.793938 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:38:56.793958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-05 02:38:56.793970 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:38:56.793997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-05 02:38:56.794008 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:38:56.794071 | orchestrator |
2026-04-05 02:38:56.794084 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-05 02:38:56.794095 | orchestrator | Sunday 05 April 2026 02:38:46 +0000 (0:00:00.538) 0:03:27.880 **********
2026-04-05 02:38:56.794106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794154 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:38:56.794164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794184 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:38:56.794211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-04-05 02:38:56.794232 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:38:56.794241 | orchestrator |
2026-04-05 02:38:56.794251 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-05 02:38:56.794261 | orchestrator | Sunday 05 April 2026 02:38:47 +0000 (0:00:00.801) 0:03:28.682 **********
2026-04-05 02:38:56.794270 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:38:56.794282 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:38:56.794295 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:38:56.794307 | orchestrator |
2026-04-05 02:38:56.794319 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-05 02:38:56.794330 | orchestrator | Sunday 05 April 2026 02:38:48 +0000 (0:00:01.953) 0:03:30.635 **********
2026-04-05 02:38:56.794342 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:38:56.794353 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:38:56.794365 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:38:56.794376 | orchestrator |
2026-04-05 02:38:56.794387 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-05 02:38:56.794399 | orchestrator | Sunday 05 April 2026 02:38:50 +0000 (0:00:01.569) 0:03:32.602 **********
2026-04-05 02:38:56.794410 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:38:56.794422 | orchestrator |
2026-04-05 02:38:56.794433 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-05 02:38:56.794445 | orchestrator | Sunday 05 April 2026 02:38:52 +0000 (0:00:01.569) 0:03:34.171 **********
2026-04-05 02:38:56.794459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:38:56.794487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:38:56.794501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:38:56.794524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:38:58.027918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:38:58.028074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028094 | orchestrator |
2026-04-05 02:38:58.028104 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-05 02:38:58.028115 | orchestrator | Sunday 05 April 2026 02:38:56 +0000 (0:00:04.277) 0:03:38.449 **********
2026-04-05 02:38:58.028142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:38:58.028159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:38:58.028183 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:38:58.028194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:38:58.028210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:39:09.309213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:39:09.309331 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:09.309371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 02:39:09.309405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 02:39:09.309419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 02:39:09.309430 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:09.309441 | orchestrator |
2026-04-05 02:39:09.309454 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-04-05 02:39:09.309466 | orchestrator | Sunday 05 April 2026 02:38:58 +0000 (0:00:01.232) 0:03:39.681 **********
2026-04-05 02:39:09.309479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309551 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:09.309563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309616 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:09.309627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-04-05 02:39:09.309677 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:09.309688 | orchestrator |
2026-04-05 02:39:09.309699 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-04-05 02:39:09.309711 | orchestrator | Sunday 05 April 2026 02:38:58 +0000 (0:00:00.941) 0:03:40.622 **********
2026-04-05 02:39:09.309724 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:09.309737 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:09.309790 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:09.309803 | orchestrator |
2026-04-05 02:39:09.309816 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-04-05 02:39:09.309829 | orchestrator | Sunday 05 April 2026 02:39:00 +0000 (0:00:01.472) 0:03:42.094 **********
2026-04-05 02:39:09.309842 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:09.309855 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:09.309868 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:09.309881 | orchestrator |
2026-04-05 02:39:09.309893 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-04-05 02:39:09.309906 | orchestrator | Sunday 05 April 2026 02:39:02 +0000 (0:00:02.251) 0:03:44.346 **********
2026-04-05 02:39:09.309919 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:39:09.309932 | orchestrator |
2026-04-05 02:39:09.309945 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-04-05 02:39:09.309958 | orchestrator | Sunday 05 April 2026 02:39:04 +0000 (0:00:01.647) 0:03:45.994 **********
2026-04-05 02:39:09.309971 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-04-05 02:39:09.309986 | orchestrator |
2026-04-05 02:39:09.309999 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-04-05 02:39:09.310012 | orchestrator | Sunday 05 April 2026 02:39:05 +0000 (0:00:00.933) 0:03:46.928 **********
2026-04-05 02:39:09.310086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:09.310119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.421544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.421656 | orchestrator |
2026-04-05 02:39:21.421673 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-04-05 02:39:21.421686 | orchestrator | Sunday 05 April 2026 02:39:09 +0000 (0:00:04.038) 0:03:50.966 **********
2026-04-05 02:39:21.421699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.421711 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:21.421741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.421855 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:21.421876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.421894 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:21.421912 | orchestrator |
2026-04-05 02:39:21.421931 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-04-05 02:39:21.421953 | orchestrator | Sunday 05 April 2026 02:39:10 +0000 (0:00:01.475) 0:03:52.442 **********
2026-04-05 02:39:21.421973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.421996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.422111 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:21.422129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.422142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.422156 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:21.422169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.422182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-04-05 02:39:21.422214 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:21.422228 | orchestrator |
2026-04-05 02:39:21.422241 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-05 02:39:21.422253 | orchestrator | Sunday 05 April 2026 02:39:12 +0000 (0:00:01.567) 0:03:54.010 **********
2026-04-05 02:39:21.422266 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:21.422279 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:21.422292 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:21.422306 | orchestrator |
2026-04-05 02:39:21.422317 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-05 02:39:21.422328 | orchestrator | Sunday 05 April 2026 02:39:14 +0000 (0:00:02.537) 0:03:56.547 **********
2026-04-05 02:39:21.422339 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:21.422349 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:21.422360 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:21.422370 | orchestrator |
2026-04-05 02:39:21.422381 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-04-05 02:39:21.422392 | orchestrator | Sunday 05 April 2026 02:39:17 +0000 (0:00:02.984) 0:03:59.532 **********
2026-04-05 02:39:21.422404 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-04-05 02:39:21.422416 | orchestrator |
2026-04-05 02:39:21.422426 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-04-05 02:39:21.422437 | orchestrator | Sunday 05 April 2026 02:39:19 +0000 (0:00:01.190) 0:04:00.723 **********
2026-04-05 02:39:21.422458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.422470 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:21.422482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.422502 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:21.422514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.422525 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:21.422536 | orchestrator |
2026-04-05 02:39:21.422546 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-04-05 02:39:21.422557 | orchestrator | Sunday 05 April 2026 02:39:20 +0000 (0:00:01.069) 0:04:01.792 **********
2026-04-05 02:39:21.422568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.422579 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:21.422590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:21.422609 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:45.327590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-04-05 02:39:45.327707 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:45.327726 | orchestrator |
2026-04-05 02:39:45.327740 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-04-05 02:39:45.327752 | orchestrator | Sunday 05 April 2026 02:39:21 +0000 (0:00:01.282) 0:04:03.074 **********
2026-04-05 02:39:45.328060 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:45.328072 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:45.328084 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:45.328098 | orchestrator |
2026-04-05 02:39:45.328110 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-04-05 02:39:45.328124 | orchestrator | Sunday 05 April 2026 02:39:22 +0000 (0:00:01.594) 0:04:04.669 **********
2026-04-05 02:39:45.328137 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:39:45.328150 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:39:45.328163 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:39:45.328176 | orchestrator |
2026-04-05 02:39:45.328189 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-04-05 02:39:45.328202 | orchestrator | Sunday 05 April 2026 02:39:25 +0000 (0:00:02.767) 0:04:07.437 **********
2026-04-05 02:39:45.328243 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:39:45.328256 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:39:45.328269 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:39:45.328282 | orchestrator |
2026-04-05 02:39:45.328311 | orchestrator | TASK [nova-cell : Configure loadbalancer for
nova-serialproxy] ***************** 2026-04-05 02:39:45.328324 | orchestrator | Sunday 05 April 2026 02:39:28 +0000 (0:00:02.754) 0:04:10.191 ********** 2026-04-05 02:39:45.328337 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-05 02:39:45.328351 | orchestrator | 2026-04-05 02:39:45.328364 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-05 02:39:45.328377 | orchestrator | Sunday 05 April 2026 02:39:29 +0000 (0:00:01.237) 0:04:11.429 ********** 2026-04-05 02:39:45.328392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328405 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:39:45.328419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328430 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:39:45.328441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328453 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:39:45.328463 | orchestrator | 2026-04-05 02:39:45.328475 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-05 02:39:45.328486 | orchestrator | Sunday 05 April 2026 02:39:31 +0000 (0:00:01.371) 0:04:12.800 ********** 2026-04-05 02:39:45.328518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328530 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:39:45.328541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328562 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:39:45.328573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 02:39:45.328584 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:39:45.328595 | orchestrator | 2026-04-05 02:39:45.328612 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-05 02:39:45.328623 | orchestrator | Sunday 05 April 2026 02:39:32 +0000 (0:00:01.415) 0:04:14.215 ********** 2026-04-05 02:39:45.328634 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:39:45.328645 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:39:45.328656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:39:45.328667 | orchestrator | 2026-04-05 02:39:45.328677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 02:39:45.328689 | orchestrator | Sunday 05 April 2026 02:39:34 +0000 (0:00:01.861) 0:04:16.076 ********** 2026-04-05 02:39:45.328699 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:39:45.328710 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:39:45.328721 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:39:45.328732 | orchestrator | 2026-04-05 02:39:45.328742 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 02:39:45.328789 | orchestrator | Sunday 05 April 2026 
02:39:36 +0000 (0:00:02.455) 0:04:18.532 ********** 2026-04-05 02:39:45.328801 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:39:45.328812 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:39:45.328823 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:39:45.328834 | orchestrator | 2026-04-05 02:39:45.328844 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-05 02:39:45.328855 | orchestrator | Sunday 05 April 2026 02:39:40 +0000 (0:00:03.336) 0:04:21.868 ********** 2026-04-05 02:39:45.328866 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:39:45.328877 | orchestrator | 2026-04-05 02:39:45.328888 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-05 02:39:45.328899 | orchestrator | Sunday 05 April 2026 02:39:41 +0000 (0:00:01.710) 0:04:23.579 ********** 2026-04-05 02:39:45.328912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 02:39:45.328924 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 02:39:45.328952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.046814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.046924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:39:46.046937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 02:39:46.046947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 
02:39:46.046957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.046985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.047010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 02:39:46.047018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:39:46.047026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 02:39:46.047034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.047041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.047081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:39:46.047089 | orchestrator | 2026-04-05 02:39:46.047098 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-05 02:39:46.047106 | orchestrator | Sunday 05 April 2026 02:39:45 +0000 (0:00:03.538) 0:04:27.117 ********** 2026-04-05 02:39:46.047119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 02:39:46.193876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 02:39:46.193968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-04-05 02:39:46.193982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.193993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:39:46.194088 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:39:46.194101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 02:39:46.194112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 02:39:46.194144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.194155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 02:39:46.194164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 02:39:46.194180 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:39:46.194189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 02:39:46.194198 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-05 02:39:46.194207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-05 02:39:46.194228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-05 02:39:58.472835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-05 02:39:58.472960 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:58.472975 | orchestrator |
2026-04-05 02:39:58.472983 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-05 02:39:58.472991 | orchestrator | Sunday 05 April 2026 02:39:46 +0000 (0:00:00.736) 0:04:27.853 **********
2026-04-05 02:39:58.473000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473103 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:39:58.473110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473123 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:39:58.473129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-05 02:39:58.473141 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:39:58.473147 | orchestrator |
2026-04-05 02:39:58.473154 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-05 02:39:58.473160 | orchestrator | Sunday 05 April 2026 02:39:47 +0000 (0:00:00.928) 0:04:28.782 **********
2026-04-05 02:39:58.473164 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:58.473168 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:58.473172 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:58.473175 | orchestrator |
2026-04-05 02:39:58.473179 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-05 02:39:58.473183 | orchestrator | Sunday 05 April 2026 02:39:48 +0000 (0:00:01.848) 0:04:30.630 **********
2026-04-05 02:39:58.473187 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:39:58.473191 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:39:58.473195 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:39:58.473199 | orchestrator |
2026-04-05 02:39:58.473203 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-05 02:39:58.473206 | orchestrator | Sunday 05 April 2026 02:39:51 +0000 (0:00:02.289) 0:04:32.920 **********
2026-04-05 02:39:58.473210 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:39:58.473215 | orchestrator |
2026-04-05 02:39:58.473219 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-05 02:39:58.473223 | orchestrator | Sunday 05 April 2026 02:39:52 +0000 (0:00:01.435) 0:04:34.355 **********
2026-04-05 02:39:58.473239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:39:58.473262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:39:58.473273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:39:58.473278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:39:58.473286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:39:58.473296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:40:00.725347 | orchestrator |
2026-04-05 02:40:00.725434 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-05 02:40:00.725447 | orchestrator | Sunday 05 April 2026 02:39:58 +0000 (0:00:05.767) 0:04:40.123 **********
2026-04-05 02:40:00.725459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:40:00.725472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:40:00.725483 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:40:00.725508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:40:00.725519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:40:00.725560 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:40:00.725571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-05 02:40:00.725580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-05 02:40:00.725602 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:40:00.725629 | orchestrator |
2026-04-05 02:40:00.725647 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-05 02:40:00.725657 | orchestrator | Sunday 05 April 2026 02:39:59 +0000 (0:00:01.230) 0:04:41.353 **********
2026-04-05 02:40:00.725667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 02:40:00.725678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:00.725690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:00.725709 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:40:00.725723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 02:40:00.725732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:00.725741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:00.725750 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:40:00.725782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-05 02:40:00.725792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:00.725811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-05 02:40:07.316124 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:40:07.316222 | orchestrator |
2026-04-05 02:40:07.316239 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-05 02:40:07.316249 | orchestrator | Sunday 05 April 2026 02:40:00 +0000 (0:00:01.025) 0:04:42.378 **********
2026-04-05 02:40:07.316258 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:40:07.316266 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:40:07.316274 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:40:07.316282 | orchestrator |
2026-04-05 02:40:07.316290 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-05 02:40:07.316299 | orchestrator | Sunday 05 April 2026 02:40:01 +0000 (0:00:00.479) 0:04:42.858 **********
2026-04-05 02:40:07.316307 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:40:07.316315 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:40:07.316323 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:40:07.316331 | orchestrator |
2026-04-05 02:40:07.316339 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-05 02:40:07.316347 | orchestrator | Sunday 05 April 2026 02:40:02 +0000 (0:00:01.584) 0:04:44.442 **********
2026-04-05 02:40:07.316357 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:40:07.316372 | orchestrator |
2026-04-05 02:40:07.316386 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-05 02:40:07.316400 | orchestrator | Sunday 05 April 2026 02:40:04 +0000 (0:00:01.912) 0:04:46.355 **********
2026-04-05 02:40:07.316418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 02:40:07.316458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 02:40:07.316487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:07.316502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:07.316511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 02:40:07.316535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 02:40:07.316544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 02:40:07.316552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 02:40:07.316567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:07.316579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 02:40:07.316588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:07.316596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:07.316610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 02:40:08.916881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:08.916976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 02:40:08.917014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 02:40:08.917050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 02:40:08.917073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:08.917095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:08.917136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 02:40:08.917159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 02:40:08.917221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 02:40:08.917241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 02:40:08.917254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:08.917276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:40:09.587231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-05 02:40:09.587348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 02:40:09.587374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name':
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.587408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.587427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 02:40:09.587445 | orchestrator | 2026-04-05 02:40:09.587465 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-05 02:40:09.587483 | orchestrator | Sunday 05 April 2026 02:40:09 +0000 (0:00:04.366) 0:04:50.721 ********** 2026-04-05 02:40:09.587501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 02:40:09.587538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 02:40:09.587568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.587585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.587603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 02:40:09.587630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 02:40:09.587651 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-05 02:40:09.587679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680370 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 02:40:09.680383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 02:40:09.680411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 02:40:09.680422 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:09.680434 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 02:40:09.680499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 02:40:09.680515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 02:40:09.680539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 02:40:09.680558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-05 02:40:09.680600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:09.680626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 02:40:11.089837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:11.089996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 02:40:11.090104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:11.090129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-05 02:40:11.090180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 02:40:11.090202 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:11.090226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:11.090277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 02:40:11.090321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-04-05 02:40:11.090342 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:11.090362 | orchestrator | 2026-04-05 02:40:11.090383 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-05 02:40:11.090403 | orchestrator | Sunday 05 April 2026 02:40:09 +0000 (0:00:00.749) 0:04:51.470 ********** 2026-04-05 02:40:11.090427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:11.090475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:11.090489 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:11.090500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:11.090545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:11.090555 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:11.090566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-05 02:40:11.090589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:11.090609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-05 02:40:18.843575 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:18.843686 | orchestrator | 2026-04-05 02:40:18.843706 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-05 02:40:18.843720 | orchestrator | Sunday 05 April 2026 02:40:11 +0000 (0:00:01.267) 0:04:52.738 ********** 2026-04-05 02:40:18.843732 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:18.843743 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:18.843755 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:18.843826 | orchestrator | 2026-04-05 02:40:18.843846 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-05 02:40:18.843864 | orchestrator | Sunday 05 April 2026 02:40:11 +0000 (0:00:00.456) 0:04:53.194 ********** 2026-04-05 02:40:18.843881 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:18.843898 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:18.843916 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:18.843932 | orchestrator | 2026-04-05 02:40:18.843948 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-05 02:40:18.843967 | orchestrator | Sunday 05 April 2026 02:40:12 +0000 (0:00:01.354) 0:04:54.549 ********** 2026-04-05 02:40:18.843986 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:40:18.844006 | orchestrator | 2026-04-05 02:40:18.844024 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-05 02:40:18.844043 | orchestrator | Sunday 05 April 2026 02:40:14 +0000 (0:00:01.801) 0:04:56.350 ********** 2026-04-05 02:40:18.844062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:40:18.844106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:40:18.844161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:40:18.844177 | orchestrator | 2026-04-05 02:40:18.844189 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-05 02:40:18.844221 | orchestrator | Sunday 05 April 2026 02:40:16 +0000 (0:00:02.262) 0:04:58.613 ********** 2026-04-05 02:40:18.844234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 02:40:18.844263 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:18.844276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 02:40:18.844288 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:18.844299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 02:40:18.844310 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:18.844322 | orchestrator | 2026-04-05 02:40:18.844333 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-05 02:40:18.844347 | orchestrator | Sunday 05 April 2026 02:40:17 +0000 (0:00:00.432) 0:04:59.046 ********** 2026-04-05 02:40:18.844367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 02:40:18.844387 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:18.844405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 02:40:18.844423 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:18.844441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 02:40:18.844461 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:18.844479 | orchestrator | 2026-04-05 02:40:18.844498 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-05 02:40:18.844510 | orchestrator | Sunday 05 April 2026 02:40:18 +0000 (0:00:00.965) 0:05:00.011 ********** 2026-04-05 02:40:18.844531 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 02:40:29.070861 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:29.071030 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:29.071046 | orchestrator | 2026-04-05 02:40:29.071059 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-05 02:40:29.071072 | orchestrator | Sunday 05 April 2026 02:40:18 +0000 (0:00:00.494) 0:05:00.506 ********** 2026-04-05 02:40:29.071082 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:29.071123 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:29.071134 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:29.071144 | orchestrator | 2026-04-05 02:40:29.071153 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-05 02:40:29.071163 | orchestrator | Sunday 05 April 2026 02:40:20 +0000 (0:00:01.411) 0:05:01.917 ********** 2026-04-05 02:40:29.071173 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:40:29.071184 | orchestrator | 2026-04-05 02:40:29.071194 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-05 02:40:29.071204 | orchestrator | Sunday 05 April 2026 02:40:21 +0000 (0:00:01.490) 0:05:03.407 ********** 2026-04-05 02:40:29.071237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 02:40:29.071352 | orchestrator | 2026-04-05 02:40:29.071364 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-05 02:40:29.071377 | orchestrator | Sunday 05 April 2026 02:40:28 +0000 (0:00:06.652) 0:05:10.059 ********** 2026-04-05 02:40:29.071389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 02:40:29.071410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 02:40:35.204924 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:35.205124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 02:40:35.205156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 02:40:35.205174 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:35.205189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 02:40:35.205205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 02:40:35.205248 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:35.205263 | orchestrator | 2026-04-05 02:40:35.205279 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-05 02:40:35.205294 | orchestrator | Sunday 05 April 2026 02:40:29 +0000 (0:00:00.673) 0:05:10.733 ********** 2026-04-05 02:40:35.205341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205410 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:35.205419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 
02:40:35.205458 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:35.205467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-05 02:40:35.205517 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:35.205535 | orchestrator | 2026-04-05 02:40:35.205559 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-05 02:40:35.205573 | orchestrator | Sunday 05 April 2026 02:40:30 +0000 (0:00:01.012) 0:05:11.746 ********** 2026-04-05 02:40:35.205585 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:40:35.205598 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:40:35.205611 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:40:35.205624 | orchestrator | 2026-04-05 02:40:35.205637 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-05 02:40:35.205650 | orchestrator | Sunday 05 April 2026 02:40:31 +0000 (0:00:01.370) 0:05:13.116 ********** 2026-04-05 02:40:35.205663 | orchestrator | 
changed: [testbed-node-0] 2026-04-05 02:40:35.205677 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:40:35.205690 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:40:35.205701 | orchestrator | 2026-04-05 02:40:35.205710 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-05 02:40:35.205718 | orchestrator | Sunday 05 April 2026 02:40:33 +0000 (0:00:02.297) 0:05:15.414 ********** 2026-04-05 02:40:35.205725 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:35.205733 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:35.205741 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:35.205749 | orchestrator | 2026-04-05 02:40:35.205757 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-05 02:40:35.205765 | orchestrator | Sunday 05 April 2026 02:40:34 +0000 (0:00:00.697) 0:05:16.111 ********** 2026-04-05 02:40:35.205804 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:35.205814 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:40:35.205822 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:40:35.205829 | orchestrator | 2026-04-05 02:40:35.205837 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-05 02:40:35.205845 | orchestrator | Sunday 05 April 2026 02:40:34 +0000 (0:00:00.375) 0:05:16.487 ********** 2026-04-05 02:40:35.205854 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:40:35.205870 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.782949 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.783087 | orchestrator | 2026-04-05 02:41:19.783106 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-05 02:41:19.783120 | orchestrator | Sunday 05 April 2026 02:40:35 +0000 (0:00:00.380) 0:05:16.867 ********** 2026-04-05 02:41:19.783132 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 02:41:19.783143 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.783154 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.783165 | orchestrator | 2026-04-05 02:41:19.783176 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-05 02:41:19.783187 | orchestrator | Sunday 05 April 2026 02:40:35 +0000 (0:00:00.364) 0:05:17.232 ********** 2026-04-05 02:41:19.783199 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.783210 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.783221 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.783232 | orchestrator | 2026-04-05 02:41:19.783243 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-05 02:41:19.783271 | orchestrator | Sunday 05 April 2026 02:40:36 +0000 (0:00:00.710) 0:05:17.943 ********** 2026-04-05 02:41:19.783284 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.783295 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.783307 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.783317 | orchestrator | 2026-04-05 02:41:19.783328 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-05 02:41:19.783339 | orchestrator | Sunday 05 April 2026 02:40:36 +0000 (0:00:00.597) 0:05:18.540 ********** 2026-04-05 02:41:19.783351 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.783363 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.783373 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.783384 | orchestrator | 2026-04-05 02:41:19.783395 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-05 02:41:19.783429 | orchestrator | Sunday 05 April 2026 02:40:37 +0000 (0:00:00.713) 0:05:19.254 ********** 2026-04-05 02:41:19.783445 | orchestrator | ok: [testbed-node-0] 
2026-04-05 02:41:19.783457 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.783469 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.783481 | orchestrator | 2026-04-05 02:41:19.783493 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-05 02:41:19.783511 | orchestrator | Sunday 05 April 2026 02:40:38 +0000 (0:00:00.722) 0:05:19.977 ********** 2026-04-05 02:41:19.783531 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.783550 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.783567 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.783585 | orchestrator | 2026-04-05 02:41:19.783603 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-05 02:41:19.783620 | orchestrator | Sunday 05 April 2026 02:40:39 +0000 (0:00:00.941) 0:05:20.918 ********** 2026-04-05 02:41:19.783638 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.783656 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.783672 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.783690 | orchestrator | 2026-04-05 02:41:19.783708 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-05 02:41:19.783727 | orchestrator | Sunday 05 April 2026 02:40:40 +0000 (0:00:00.882) 0:05:21.801 ********** 2026-04-05 02:41:19.783744 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.783763 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.783837 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.783860 | orchestrator | 2026-04-05 02:41:19.783880 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-05 02:41:19.783897 | orchestrator | Sunday 05 April 2026 02:40:41 +0000 (0:00:00.926) 0:05:22.727 ********** 2026-04-05 02:41:19.783916 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:41:19.783930 | orchestrator | changed: [testbed-node-1] 
2026-04-05 02:41:19.783941 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:41:19.783952 | orchestrator | 2026-04-05 02:41:19.783963 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-05 02:41:19.783974 | orchestrator | Sunday 05 April 2026 02:40:45 +0000 (0:00:04.776) 0:05:27.504 ********** 2026-04-05 02:41:19.783985 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.783995 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.784006 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.784016 | orchestrator | 2026-04-05 02:41:19.784027 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-05 02:41:19.784038 | orchestrator | Sunday 05 April 2026 02:40:49 +0000 (0:00:03.231) 0:05:30.736 ********** 2026-04-05 02:41:19.784049 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:41:19.784059 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:41:19.784070 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:41:19.784081 | orchestrator | 2026-04-05 02:41:19.784092 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-05 02:41:19.784103 | orchestrator | Sunday 05 April 2026 02:40:59 +0000 (0:00:10.634) 0:05:41.370 ********** 2026-04-05 02:41:19.784114 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.784124 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.784135 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.784146 | orchestrator | 2026-04-05 02:41:19.784156 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-05 02:41:19.784167 | orchestrator | Sunday 05 April 2026 02:41:04 +0000 (0:00:04.870) 0:05:46.241 ********** 2026-04-05 02:41:19.784178 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:41:19.784189 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:41:19.784199 | orchestrator | 
changed: [testbed-node-2] 2026-04-05 02:41:19.784210 | orchestrator | 2026-04-05 02:41:19.784221 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-05 02:41:19.784232 | orchestrator | Sunday 05 April 2026 02:41:14 +0000 (0:00:09.461) 0:05:55.703 ********** 2026-04-05 02:41:19.784258 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784269 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784280 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.784291 | orchestrator | 2026-04-05 02:41:19.784301 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-05 02:41:19.784312 | orchestrator | Sunday 05 April 2026 02:41:14 +0000 (0:00:00.724) 0:05:56.427 ********** 2026-04-05 02:41:19.784323 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784334 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784345 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.784356 | orchestrator | 2026-04-05 02:41:19.784389 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-05 02:41:19.784400 | orchestrator | Sunday 05 April 2026 02:41:15 +0000 (0:00:00.386) 0:05:56.814 ********** 2026-04-05 02:41:19.784411 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784422 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784433 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.784443 | orchestrator | 2026-04-05 02:41:19.784454 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-05 02:41:19.784465 | orchestrator | Sunday 05 April 2026 02:41:15 +0000 (0:00:00.384) 0:05:57.198 ********** 2026-04-05 02:41:19.784476 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784487 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784497 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 02:41:19.784508 | orchestrator | 2026-04-05 02:41:19.784519 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-05 02:41:19.784530 | orchestrator | Sunday 05 April 2026 02:41:15 +0000 (0:00:00.387) 0:05:57.586 ********** 2026-04-05 02:41:19.784540 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784560 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784572 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.784582 | orchestrator | 2026-04-05 02:41:19.784593 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-05 02:41:19.784604 | orchestrator | Sunday 05 April 2026 02:41:16 +0000 (0:00:00.743) 0:05:58.329 ********** 2026-04-05 02:41:19.784615 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:19.784626 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:19.784636 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:19.784647 | orchestrator | 2026-04-05 02:41:19.784658 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-05 02:41:19.784668 | orchestrator | Sunday 05 April 2026 02:41:17 +0000 (0:00:00.394) 0:05:58.724 ********** 2026-04-05 02:41:19.784679 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.784690 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.784700 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:41:19.784711 | orchestrator | 2026-04-05 02:41:19.784722 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-05 02:41:19.784733 | orchestrator | Sunday 05 April 2026 02:41:17 +0000 (0:00:00.944) 0:05:59.669 ********** 2026-04-05 02:41:19.784743 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:41:19.784754 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:41:19.784765 | orchestrator | ok: [testbed-node-2] 2026-04-05 
02:41:19.784775 | orchestrator | 2026-04-05 02:41:19.784809 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:41:19.784822 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-05 02:41:19.784834 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-05 02:41:19.784845 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-05 02:41:19.784856 | orchestrator | 2026-04-05 02:41:19.784874 | orchestrator | 2026-04-05 02:41:19.784885 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:41:19.784896 | orchestrator | Sunday 05 April 2026 02:41:18 +0000 (0:00:00.872) 0:06:00.541 ********** 2026-04-05 02:41:19.784907 | orchestrator | =============================================================================== 2026-04-05 02:41:19.784917 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.63s 2026-04-05 02:41:19.784928 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.46s 2026-04-05 02:41:19.784939 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.65s 2026-04-05 02:41:19.784950 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.77s 2026-04-05 02:41:19.784960 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.87s 2026-04-05 02:41:19.784971 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.78s 2026-04-05 02:41:19.784982 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.43s 2026-04-05 02:41:19.784993 | orchestrator | haproxy-config : Copying over prometheus haproxy config 
----------------- 4.37s 2026-04-05 02:41:19.785004 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.28s 2026-04-05 02:41:19.785014 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.04s 2026-04-05 02:41:19.785025 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.03s 2026-04-05 02:41:19.785036 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.61s 2026-04-05 02:41:19.785047 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.56s 2026-04-05 02:41:19.785057 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.55s 2026-04-05 02:41:19.785068 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.54s 2026-04-05 02:41:19.785079 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.47s 2026-04-05 02:41:19.785090 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.47s 2026-04-05 02:41:19.785100 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.40s 2026-04-05 02:41:19.785111 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.39s 2026-04-05 02:41:19.785122 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.35s 2026-04-05 02:41:22.527697 | orchestrator | 2026-04-05 02:41:22 | INFO  | Task 36610aaf-a1ca-4341-bf60-5ee63e88a455 (opensearch) was prepared for execution. 2026-04-05 02:41:22.527838 | orchestrator | 2026-04-05 02:41:22 | INFO  | It takes a moment until task 36610aaf-a1ca-4341-bf60-5ee63e88a455 (opensearch) has been started and output is visible here. 
2026-04-05 02:41:33.715261 | orchestrator |
2026-04-05 02:41:33.715384 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:41:33.715399 | orchestrator |
2026-04-05 02:41:33.715409 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:41:33.715424 | orchestrator | Sunday 05 April 2026 02:41:27 +0000 (0:00:00.269) 0:00:00.269 **********
2026-04-05 02:41:33.715437 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:41:33.715451 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:41:33.715464 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:41:33.715477 | orchestrator |
2026-04-05 02:41:33.715490 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:41:33.715503 | orchestrator | Sunday 05 April 2026 02:41:27 +0000 (0:00:00.286) 0:00:00.555 **********
2026-04-05 02:41:33.715536 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-05 02:41:33.715551 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-05 02:41:33.715565 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-05 02:41:33.715579 | orchestrator |
2026-04-05 02:41:33.715592 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-05 02:41:33.715632 | orchestrator |
2026-04-05 02:41:33.715647 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-05 02:41:33.715660 | orchestrator | Sunday 05 April 2026 02:41:27 +0000 (0:00:00.464) 0:00:01.019 **********
2026-04-05 02:41:33.715674 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:41:33.715688 | orchestrator |
2026-04-05 02:41:33.715702 | orchestrator | TASK [opensearch : Setting sysctl values]
************************************** 2026-04-05 02:41:33.715715 | orchestrator | Sunday 05 April 2026 02:41:28 +0000 (0:00:00.515) 0:00:01.535 ********** 2026-04-05 02:41:33.715728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 02:41:33.715742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 02:41:33.715757 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 02:41:33.715766 | orchestrator | 2026-04-05 02:41:33.715775 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-05 02:41:33.715836 | orchestrator | Sunday 05 April 2026 02:41:28 +0000 (0:00:00.667) 0:00:02.203 ********** 2026-04-05 02:41:33.715852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:33.715867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:33.715895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:33.715914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:33.715935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:33.715945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:33.715955 | orchestrator | 2026-04-05 02:41:33.715964 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 02:41:33.715974 | orchestrator | Sunday 05 April 2026 02:41:30 +0000 (0:00:01.744) 0:00:03.947 ********** 2026-04-05 02:41:33.715983 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:41:33.715992 | orchestrator | 2026-04-05 02:41:33.716002 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-05 02:41:33.716012 | orchestrator | Sunday 05 April 2026 02:41:31 +0000 (0:00:00.623) 0:00:04.571 ********** 2026-04-05 02:41:33.716032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:34.562589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:34.562676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:34.562688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:34.562696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:34.562752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:34.562761 | orchestrator | 2026-04-05 02:41:34.562769 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-05 02:41:34.562776 | orchestrator | Sunday 05 April 2026 02:41:33 +0000 (0:00:02.393) 0:00:06.964 ********** 2026-04-05 02:41:34.562784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:34.562838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:34.562845 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:34.562851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:34.562877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:35.629588 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:35.629675 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:35.629685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:35.629691 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:35.629696 | orchestrator | 2026-04-05 02:41:35.629702 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-05 02:41:35.629708 | orchestrator | Sunday 05 April 2026 02:41:34 +0000 (0:00:00.845) 0:00:07.810 ********** 2026-04-05 02:41:35.629732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:35.629748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:35.629762 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:41:35.629768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:35.629773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:35.629778 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:41:35.629826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-05 02:41:35.629838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-05 02:41:35.629843 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:41:35.629848 | orchestrator | 2026-04-05 02:41:35.629853 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-05 02:41:35.629861 | orchestrator | Sunday 05 April 2026 02:41:35 +0000 (0:00:01.064) 0:00:08.874 ********** 2026-04-05 02:41:44.007890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:44.008023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:44.008045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:44.008114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:44.008161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:44.008181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:41:44.008210 | orchestrator | 2026-04-05 02:41:44.008229 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-05 02:41:44.008247 | orchestrator | Sunday 05 April 2026 02:41:38 +0000 (0:00:02.452) 0:00:11.327 ********** 2026-04-05 02:41:44.008263 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:41:44.008281 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:41:44.008297 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:41:44.008312 | orchestrator | 2026-04-05 02:41:44.008328 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-05 02:41:44.008344 | orchestrator | Sunday 05 April 2026 02:41:40 +0000 (0:00:02.343) 0:00:13.671 ********** 2026-04-05 02:41:44.008359 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:41:44.008376 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:41:44.008393 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:41:44.008409 | orchestrator | 2026-04-05 02:41:44.008427 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-05 
02:41:44.008443 | orchestrator | Sunday 05 April 2026 02:41:42 +0000 (0:00:01.872) 0:00:15.543 ********** 2026-04-05 02:41:44.008461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:44.008488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:41:44.008518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-05 02:44:34.706108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:44:34.706226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:44:34.706250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-05 02:44:34.706259 | orchestrator | 2026-04-05 02:44:34.706267 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 02:44:34.706275 | orchestrator | Sunday 05 April 2026 02:41:43 +0000 (0:00:01.715) 0:00:17.258 ********** 2026-04-05 02:44:34.706283 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:44:34.706291 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:44:34.706297 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:44:34.706304 | orchestrator | 2026-04-05 02:44:34.706311 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 02:44:34.706318 | orchestrator | Sunday 05 April 2026 02:41:44 +0000 (0:00:00.312) 0:00:17.571 ********** 2026-04-05 02:44:34.706325 | orchestrator | 2026-04-05 02:44:34.706332 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 02:44:34.706338 | orchestrator | Sunday 05 April 2026 02:41:44 +0000 (0:00:00.078) 0:00:17.650 ********** 2026-04-05 02:44:34.706345 | orchestrator | 2026-04-05 02:44:34.706351 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 02:44:34.706363 | orchestrator | Sunday 05 April 2026 02:41:44 +0000 (0:00:00.072) 0:00:17.722 ********** 2026-04-05 02:44:34.706370 | orchestrator | 2026-04-05 02:44:34.706376 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-05 02:44:34.706398 | orchestrator | Sunday 05 April 2026 02:41:44 +0000 (0:00:00.066) 0:00:17.789 ********** 2026-04-05 02:44:34.706405 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:44:34.706412 | orchestrator | 2026-04-05 02:44:34.706418 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-05 02:44:34.706425 | 
orchestrator | Sunday 05 April 2026 02:41:44 +0000 (0:00:00.216) 0:00:18.005 ********** 2026-04-05 02:44:34.706431 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:44:34.706438 | orchestrator | 2026-04-05 02:44:34.706449 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-05 02:44:34.706460 | orchestrator | Sunday 05 April 2026 02:41:45 +0000 (0:00:00.645) 0:00:18.651 ********** 2026-04-05 02:44:34.706471 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:34.706482 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:44:34.706493 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:44:34.706505 | orchestrator | 2026-04-05 02:44:34.706516 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-05 02:44:34.706526 | orchestrator | Sunday 05 April 2026 02:42:53 +0000 (0:01:08.088) 0:01:26.739 ********** 2026-04-05 02:44:34.706537 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:34.706548 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:44:34.706559 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:44:34.706571 | orchestrator | 2026-04-05 02:44:34.706581 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 02:44:34.706588 | orchestrator | Sunday 05 April 2026 02:44:23 +0000 (0:01:30.060) 0:02:56.800 ********** 2026-04-05 02:44:34.706596 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:44:34.706602 | orchestrator | 2026-04-05 02:44:34.706609 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-05 02:44:34.706618 | orchestrator | Sunday 05 April 2026 02:44:24 +0000 (0:00:00.502) 0:02:57.303 ********** 2026-04-05 02:44:34.706625 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:44:34.706633 | orchestrator | 2026-04-05 
02:44:34.706642 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-05 02:44:34.706649 | orchestrator | Sunday 05 April 2026 02:44:26 +0000 (0:00:02.842) 0:03:00.145 ********** 2026-04-05 02:44:34.706657 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:44:34.706665 | orchestrator | 2026-04-05 02:44:34.706673 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-05 02:44:34.706680 | orchestrator | Sunday 05 April 2026 02:44:29 +0000 (0:00:02.325) 0:03:02.471 ********** 2026-04-05 02:44:34.706688 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:34.706696 | orchestrator | 2026-04-05 02:44:34.706704 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-05 02:44:34.706712 | orchestrator | Sunday 05 April 2026 02:44:32 +0000 (0:00:02.847) 0:03:05.318 ********** 2026-04-05 02:44:34.706720 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:34.706728 | orchestrator | 2026-04-05 02:44:34.706736 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:44:34.706744 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 02:44:34.706752 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 02:44:34.706765 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 02:44:34.706772 | orchestrator | 2026-04-05 02:44:34.706778 | orchestrator | 2026-04-05 02:44:34.706790 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:44:34.706797 | orchestrator | Sunday 05 April 2026 02:44:34 +0000 (0:00:02.621) 0:03:07.940 ********** 2026-04-05 02:44:34.706804 | orchestrator | 
=============================================================================== 2026-04-05 02:44:34.706811 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 90.06s 2026-04-05 02:44:34.706817 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.09s 2026-04-05 02:44:34.706824 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.85s 2026-04-05 02:44:34.706830 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.84s 2026-04-05 02:44:34.706837 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.62s 2026-04-05 02:44:34.706867 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2026-04-05 02:44:34.706874 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.39s 2026-04-05 02:44:34.706881 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.34s 2026-04-05 02:44:34.706888 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.33s 2026-04-05 02:44:34.706894 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.87s 2026-04-05 02:44:34.706901 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.74s 2026-04-05 02:44:34.706908 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.72s 2026-04-05 02:44:34.706914 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2026-04-05 02:44:34.706921 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.85s 2026-04-05 02:44:34.706928 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-04-05 02:44:34.706934 | orchestrator | 
opensearch : Perform a flush -------------------------------------------- 0.65s 2026-04-05 02:44:34.706946 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-04-05 02:44:35.097332 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-04-05 02:44:35.097420 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-04-05 02:44:35.097431 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-05 02:44:37.623696 | orchestrator | 2026-04-05 02:44:37 | INFO  | Task 45005f07-4b64-48e8-aff3-0da79c7f30c5 (memcached) was prepared for execution. 2026-04-05 02:44:37.623787 | orchestrator | 2026-04-05 02:44:37 | INFO  | It takes a moment until task 45005f07-4b64-48e8-aff3-0da79c7f30c5 (memcached) has been started and output is visible here. 2026-04-05 02:44:54.541480 | orchestrator | 2026-04-05 02:44:54.542602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 02:44:54.542671 | orchestrator | 2026-04-05 02:44:54.542684 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 02:44:54.542695 | orchestrator | Sunday 05 April 2026 02:44:41 +0000 (0:00:00.278) 0:00:00.278 ********** 2026-04-05 02:44:54.542704 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:44:54.542714 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:44:54.542722 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:44:54.542731 | orchestrator | 2026-04-05 02:44:54.542740 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 02:44:54.542749 | orchestrator | Sunday 05 April 2026 02:44:42 +0000 (0:00:00.324) 0:00:00.603 ********** 2026-04-05 02:44:54.542759 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-05 02:44:54.542768 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-05 02:44:54.542777 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-05 02:44:54.542786 | orchestrator | 2026-04-05 02:44:54.542795 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-05 02:44:54.542828 | orchestrator | 2026-04-05 02:44:54.542838 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-05 02:44:54.542881 | orchestrator | Sunday 05 April 2026 02:44:42 +0000 (0:00:00.506) 0:00:01.109 ********** 2026-04-05 02:44:54.542891 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:44:54.542900 | orchestrator | 2026-04-05 02:44:54.542909 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-05 02:44:54.542918 | orchestrator | Sunday 05 April 2026 02:44:43 +0000 (0:00:00.533) 0:00:01.642 ********** 2026-04-05 02:44:54.542926 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 02:44:54.542935 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 02:44:54.542944 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 02:44:54.542953 | orchestrator | 2026-04-05 02:44:54.542961 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-05 02:44:54.542970 | orchestrator | Sunday 05 April 2026 02:44:44 +0000 (0:00:00.836) 0:00:02.479 ********** 2026-04-05 02:44:54.542978 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-05 02:44:54.542987 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-05 02:44:54.542996 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-05 02:44:54.543004 | orchestrator | 2026-04-05 02:44:54.543013 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-04-05 02:44:54.543021 | orchestrator | Sunday 05 April 2026 02:44:46 +0000 (0:00:01.958) 0:00:04.437 ********** 2026-04-05 02:44:54.543044 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:54.543054 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:44:54.543063 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:44:54.543071 | orchestrator | 2026-04-05 02:44:54.543080 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-05 02:44:54.543088 | orchestrator | Sunday 05 April 2026 02:44:47 +0000 (0:00:01.581) 0:00:06.019 ********** 2026-04-05 02:44:54.543097 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:44:54.543106 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:44:54.543114 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:44:54.543123 | orchestrator | 2026-04-05 02:44:54.543131 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:44:54.543140 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:44:54.543151 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:44:54.543159 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:44:54.543168 | orchestrator | 2026-04-05 02:44:54.543177 | orchestrator | 2026-04-05 02:44:54.543185 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:44:54.543194 | orchestrator | Sunday 05 April 2026 02:44:54 +0000 (0:00:06.450) 0:00:12.470 ********** 2026-04-05 02:44:54.543203 | orchestrator | =============================================================================== 2026-04-05 02:44:54.543211 | orchestrator | memcached : Restart memcached container 
--------------------------------- 6.45s 2026-04-05 02:44:54.543220 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.96s 2026-04-05 02:44:54.543229 | orchestrator | memcached : Check memcached container ----------------------------------- 1.58s 2026-04-05 02:44:54.543237 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.84s 2026-04-05 02:44:54.543246 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.53s 2026-04-05 02:44:54.543254 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-04-05 02:44:54.543263 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-04-05 02:44:57.026785 | orchestrator | 2026-04-05 02:44:57 | INFO  | Task 8d200311-990d-4b74-905f-8f2655e83001 (redis) was prepared for execution. 2026-04-05 02:44:57.026973 | orchestrator | 2026-04-05 02:44:57 | INFO  | It takes a moment until task 8d200311-990d-4b74-905f-8f2655e83001 (redis) has been started and output is visible here. 
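Each container item in the plays above carries the same kolla-ansible healthcheck shape: string-valued `interval`, `retries`, `start_period`, `timeout`, plus a `test` list such as `['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200']` or `['CMD-SHELL', 'healthcheck_listen redis-server 6379']`. As a rough illustration of how such a dict maps onto Docker-style healthcheck parameters (Docker expresses durations in nanoseconds), here is a minimal sketch; the conversion function is an assumption for illustration, not kolla-ansible's actual implementation:

```python
# Hedged sketch: translate a kolla-style healthcheck dict (key names taken
# from the log items above) into Docker-API style parameters. The function
# name and mapping are illustrative assumptions, not kolla-ansible code.

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert string seconds to Docker's integer nanoseconds."""
    ns = 1_000_000_000  # Docker durations are nanoseconds
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl ...']
        "interval": int(hc["interval"]) * ns,
        "timeout": int(hc["timeout"]) * ns,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * ns,
    }

# Example dict copied from the opensearch item in the log above
example = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
    "timeout": "30",
}

print(to_docker_healthcheck(example)["retries"])  # → 3
```

The same shape recurs for every service in this log (opensearch on 9200, opensearch-dashboards on 5601, memcached, redis on 6379, redis-sentinel on 26379), which is why the per-host loop items look nearly identical apart from the bind address and port.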
2026-04-05 02:45:06.384798 | orchestrator |
2026-04-05 02:45:06.384996 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:45:06.385015 | orchestrator |
2026-04-05 02:45:06.385028 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:45:06.385040 | orchestrator | Sunday 05 April 2026 02:45:01 +0000 (0:00:00.309) 0:00:00.309 **********
2026-04-05 02:45:06.385051 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:45:06.385063 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:45:06.385074 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:45:06.385085 | orchestrator |
2026-04-05 02:45:06.385096 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:45:06.385107 | orchestrator | Sunday 05 April 2026 02:45:01 +0000 (0:00:00.313) 0:00:00.623 **********
2026-04-05 02:45:06.385118 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-05 02:45:06.385129 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-05 02:45:06.385140 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-05 02:45:06.385151 | orchestrator |
2026-04-05 02:45:06.385162 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-05 02:45:06.385173 | orchestrator |
2026-04-05 02:45:06.385184 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-05 02:45:06.385195 | orchestrator | Sunday 05 April 2026 02:45:02 +0000 (0:00:00.443) 0:00:01.066 **********
2026-04-05 02:45:06.385205 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:45:06.385217 | orchestrator |
2026-04-05 02:45:06.385228 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-05 02:45:06.385239 | orchestrator | Sunday 05 April 2026 02:45:02 +0000 (0:00:00.564) 0:00:01.630 **********
2026-04-05 02:45:06.385253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385379 | orchestrator |
2026-04-05 02:45:06.385394 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-05 02:45:06.385407 | orchestrator | Sunday 05 April 2026 02:45:03 +0000 (0:00:01.154) 0:00:02.785 **********
2026-04-05 02:45:06.385418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:06.385593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581258 | orchestrator |
2026-04-05 02:45:10.581274 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-05 02:45:10.581286 | orchestrator | Sunday 05 April 2026 02:45:06 +0000 (0:00:02.403) 0:00:05.188 **********
2026-04-05 02:45:10.581298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581462 | orchestrator |
2026-04-05 02:45:10.581481 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-04-05 02:45:10.581499 | orchestrator | Sunday 05 April 2026 02:45:08 +0000 (0:00:02.502) 0:00:07.690 **********
2026-04-05 02:45:10.581515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:10.581589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-05 02:45:26.530450 | orchestrator |
2026-04-05 02:45:26.530562 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 02:45:26.530578 | orchestrator | Sunday 05 April 2026 02:45:10 +0000 (0:00:01.491) 0:00:09.182 **********
2026-04-05 02:45:26.530590 | orchestrator |
2026-04-05 02:45:26.530599 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 02:45:26.530609 | orchestrator | Sunday 05 April 2026 02:45:10 +0000 (0:00:00.069) 0:00:09.252 **********
2026-04-05 02:45:26.530619 | orchestrator |
2026-04-05 02:45:26.530629 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 02:45:26.530638 | orchestrator | Sunday 05 April 2026 02:45:10 +0000 (0:00:00.068) 0:00:09.320 **********
2026-04-05 02:45:26.530648 | orchestrator |
2026-04-05 02:45:26.530657 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-05 02:45:26.530667 | orchestrator | Sunday 05 April 2026 02:45:10 +0000 (0:00:00.064) 0:00:09.385 **********
2026-04-05 02:45:26.530676 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:45:26.530687 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:26.530697 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:45:26.530706 | orchestrator |
2026-04-05 02:45:26.530716 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-05 02:45:26.530726 | orchestrator | Sunday 05 April 2026 02:45:18 +0000 (0:00:07.553) 0:00:16.939 **********
2026-04-05 02:45:26.530762 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:26.530773 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:45:26.530782 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:45:26.530792 | orchestrator |
2026-04-05 02:45:26.530802 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:45:26.530812 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:45:26.530823 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:45:26.530846 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 02:45:26.530934 | orchestrator |
2026-04-05 02:45:26.530945 | orchestrator |
2026-04-05 02:45:26.530955 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:45:26.530964 | orchestrator | Sunday 05 April 2026 02:45:26 +0000 (0:00:08.023) 0:00:24.962 **********
2026-04-05 02:45:26.530974 | orchestrator | ===============================================================================
2026-04-05 02:45:26.530983 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.02s
2026-04-05 02:45:26.530993 | orchestrator | redis : Restart redis container ----------------------------------------- 7.55s
2026-04-05 02:45:26.531005 | orchestrator | redis : Copying over redis config files --------------------------------- 2.50s
2026-04-05 02:45:26.531016 | orchestrator | redis : Copying over default config.json files -------------------------- 2.40s
2026-04-05 02:45:26.531027 | orchestrator | redis : Check redis containers ------------------------------------------ 1.49s
2026-04-05 02:45:26.531038 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.15s
2026-04-05 02:45:26.531049 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s
2026-04-05 02:45:26.531060 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2026-04-05 02:45:26.531071 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-04-05 02:45:26.531082 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-04-05 02:45:29.085119 | orchestrator | 2026-04-05 02:45:29 | INFO  | Task c5f7a62f-7859-43d1-8830-98a90043b31a (mariadb) was prepared for execution.
2026-04-05 02:45:29.085228 | orchestrator | 2026-04-05 02:45:29 | INFO  | It takes a moment until task c5f7a62f-7859-43d1-8830-98a90043b31a (mariadb) has been started and output is visible here.
2026-04-05 02:45:43.243454 | orchestrator |
2026-04-05 02:45:43.243565 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:45:43.243581 | orchestrator |
2026-04-05 02:45:43.243592 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:45:43.243602 | orchestrator | Sunday 05 April 2026 02:45:33 +0000 (0:00:00.165) 0:00:00.165 **********
2026-04-05 02:45:43.243612 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:45:43.243623 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:45:43.243633 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:45:43.243642 | orchestrator |
2026-04-05 02:45:43.243652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:45:43.243663 | orchestrator | Sunday 05 April 2026 02:45:33 +0000 (0:00:00.350) 0:00:00.516 **********
2026-04-05 02:45:43.243673 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-05 02:45:43.243684 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-05 02:45:43.243693 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-05 02:45:43.243703 | orchestrator |
2026-04-05 02:45:43.243712 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-05 02:45:43.243722 | orchestrator |
2026-04-05 02:45:43.243732 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-05 02:45:43.243765 | orchestrator | Sunday 05 April 2026 02:45:34 +0000 (0:00:00.544) 0:00:01.060 **********
2026-04-05 02:45:43.243776 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 02:45:43.243786 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 02:45:43.243795 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 02:45:43.243805 | orchestrator |
2026-04-05 02:45:43.243814 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 02:45:43.243824 | orchestrator | Sunday 05 April 2026 02:45:34 +0000 (0:00:00.381) 0:00:01.442 **********
2026-04-05 02:45:43.243834 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:45:43.243845 | orchestrator |
2026-04-05 02:45:43.243855 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-05 02:45:43.243948 | orchestrator | Sunday 05 April 2026 02:45:35 +0000 (0:00:00.527) 0:00:01.970 **********
2026-04-05 02:45:43.243978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:43.244015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:43.244043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:43.244057 | orchestrator |
2026-04-05 02:45:43.244069 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-05 02:45:43.244080 | orchestrator | Sunday 05 April 2026 02:45:37 +0000 (0:00:02.682) 0:00:04.652 **********
2026-04-05 02:45:43.244092 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:45:43.244105 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:43.244120 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:45:43.244137 | orchestrator |
2026-04-05 02:45:43.244153 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-05 02:45:43.244170 | orchestrator | Sunday 05 April 2026 02:45:38 +0000 (0:00:00.652) 0:00:05.304 **********
2026-04-05 02:45:43.244186 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:45:43.244201 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:45:43.244233 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:43.244261 | orchestrator |
2026-04-05 02:45:43.244276 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-05 02:45:43.244292 | orchestrator | Sunday 05 April 2026 02:45:40 +0000 (0:00:01.477) 0:00:06.782 **********
2026-04-05 02:45:43.244325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:51.026283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:51.026407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-05 02:45:51.026448 | orchestrator |
2026-04-05 02:45:51.026460 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-05 02:45:51.026471 | orchestrator | Sunday 05 April 2026 02:45:43 +0000 (0:00:03.208) 0:00:09.991 **********
2026-04-05 02:45:51.026480 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:45:51.026490 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:45:51.026499 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:51.026507 | orchestrator |
2026-04-05 02:45:51.026517 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-05 02:45:51.026542 | orchestrator | Sunday 05 April 2026 02:45:44 +0000 (0:00:01.113) 0:00:11.104 **********
2026-04-05 02:45:51.026552 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:45:51.026560 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:45:51.026569 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:45:51.026578 | orchestrator |
2026-04-05 02:45:51.026587 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 02:45:51.026595 | orchestrator | Sunday 05 April 2026 02:45:48 +0000 (0:00:03.904) 0:00:15.009 **********
2026-04-05 02:45:51.026605 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:45:51.026613 | orchestrator |
2026-04-05 02:45:51.026622 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-05 02:45:51.026631 | orchestrator | Sunday 05 April 2026 02:45:48 +0000 (0:00:00.537) 0:00:15.546 **********
2026-04-05 02:45:51.026647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:51.026664 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:45:51.026681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:55.865012 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:45:55.865140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:55.865181 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:45:55.865194 | orchestrator | 2026-04-05 02:45:55.865207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 02:45:55.865218 | orchestrator | Sunday 05 April 2026 02:45:51 +0000 (0:00:02.227) 0:00:17.774 ********** 2026-04-05 02:45:55.865231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:55.865243 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:45:55.865281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:55.865305 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:45:55.865317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:55.865329 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:45:55.865341 | orchestrator | 2026-04-05 02:45:55.865352 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 02:45:55.865363 | orchestrator | Sunday 05 April 2026 02:45:53 +0000 (0:00:02.505) 0:00:20.279 ********** 2026-04-05 02:45:55.865389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:58.871265 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:45:58.871386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:58.871412 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:45:58.871448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 02:45:58.871490 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:45:58.871506 | orchestrator | 2026-04-05 02:45:58.871525 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-05 02:45:58.871543 | orchestrator | Sunday 05 April 2026 02:45:55 +0000 (0:00:02.338) 0:00:22.618 ********** 2026-04-05 02:45:58.871582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 02:45:58.871595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 02:45:58.871620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 02:48:17.472061 | orchestrator | 2026-04-05 02:48:17.472174 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-05 02:48:17.472192 | orchestrator | Sunday 05 April 2026 02:45:58 +0000 (0:00:03.002) 0:00:25.621 ********** 2026-04-05 02:48:17.472204 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:17.472217 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:48:17.472228 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:48:17.472239 | orchestrator | 2026-04-05 02:48:17.472250 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-05 02:48:17.472261 | orchestrator | Sunday 05 April 2026 02:45:59 +0000 (0:00:00.841) 0:00:26.462 ********** 2026-04-05 02:48:17.472272 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.472284 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.472295 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.472306 | orchestrator | 2026-04-05 02:48:17.472317 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-04-05 02:48:17.472328 | orchestrator | Sunday 05 April 2026 02:46:00 +0000 (0:00:00.559) 0:00:27.022 ********** 2026-04-05 02:48:17.472339 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.472349 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.472360 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.472371 | orchestrator | 2026-04-05 02:48:17.472382 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-05 02:48:17.472392 | orchestrator | Sunday 05 April 2026 02:46:00 +0000 (0:00:00.341) 0:00:27.363 ********** 2026-04-05 02:48:17.472404 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-05 02:48:17.472417 | orchestrator | ...ignoring 2026-04-05 02:48:17.472429 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-05 02:48:17.472440 | orchestrator | ...ignoring 2026-04-05 02:48:17.472452 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-05 02:48:17.472462 | orchestrator | ...ignoring 2026-04-05 02:48:17.472499 | orchestrator | 2026-04-05 02:48:17.472511 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-05 02:48:17.472522 | orchestrator | Sunday 05 April 2026 02:46:11 +0000 (0:00:10.912) 0:00:38.276 ********** 2026-04-05 02:48:17.472533 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.472544 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.472554 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.472565 | orchestrator | 2026-04-05 02:48:17.472576 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-05 02:48:17.472587 | orchestrator | Sunday 05 April 2026 02:46:11 +0000 (0:00:00.428) 0:00:38.704 ********** 2026-04-05 02:48:17.472600 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.472613 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.472626 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.472638 | orchestrator | 2026-04-05 02:48:17.472651 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-05 02:48:17.472664 | orchestrator | Sunday 05 April 2026 02:46:12 +0000 (0:00:00.673) 0:00:39.378 ********** 2026-04-05 02:48:17.472676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.472688 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.472700 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.472757 | orchestrator | 2026-04-05 02:48:17.472797 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-05 02:48:17.472820 | orchestrator | Sunday 05 April 2026 02:46:13 +0000 (0:00:00.421) 0:00:39.800 ********** 2026-04-05 02:48:17.472839 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 02:48:17.472857 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.472869 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.472879 | orchestrator | 2026-04-05 02:48:17.472890 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-05 02:48:17.472901 | orchestrator | Sunday 05 April 2026 02:46:13 +0000 (0:00:00.436) 0:00:40.236 ********** 2026-04-05 02:48:17.472911 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.472922 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.472932 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.472943 | orchestrator | 2026-04-05 02:48:17.472954 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-05 02:48:17.472965 | orchestrator | Sunday 05 April 2026 02:46:13 +0000 (0:00:00.456) 0:00:40.693 ********** 2026-04-05 02:48:17.472976 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.472987 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.472998 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.473008 | orchestrator | 2026-04-05 02:48:17.473019 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 02:48:17.473029 | orchestrator | Sunday 05 April 2026 02:46:14 +0000 (0:00:00.914) 0:00:41.607 ********** 2026-04-05 02:48:17.473040 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.473051 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.473062 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-05 02:48:17.473073 | orchestrator | 2026-04-05 02:48:17.473083 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-05 02:48:17.473094 | orchestrator | Sunday 05 April 2026 02:46:15 +0000 (0:00:00.405) 0:00:42.013 ********** 2026-04-05 
02:48:17.473105 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:17.473115 | orchestrator | 2026-04-05 02:48:17.473126 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-05 02:48:17.473137 | orchestrator | Sunday 05 April 2026 02:46:25 +0000 (0:00:10.233) 0:00:52.246 ********** 2026-04-05 02:48:17.473147 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.473159 | orchestrator | 2026-04-05 02:48:17.473175 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 02:48:17.473196 | orchestrator | Sunday 05 April 2026 02:46:25 +0000 (0:00:00.142) 0:00:52.389 ********** 2026-04-05 02:48:17.473214 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.473258 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.473270 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.473281 | orchestrator | 2026-04-05 02:48:17.473292 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-05 02:48:17.473303 | orchestrator | Sunday 05 April 2026 02:46:26 +0000 (0:00:01.009) 0:00:53.398 ********** 2026-04-05 02:48:17.473314 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:17.473325 | orchestrator | 2026-04-05 02:48:17.473336 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-05 02:48:17.473347 | orchestrator | Sunday 05 April 2026 02:46:34 +0000 (0:00:07.954) 0:01:01.353 ********** 2026-04-05 02:48:17.473357 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.473368 | orchestrator | 2026-04-05 02:48:17.473378 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-05 02:48:17.473389 | orchestrator | Sunday 05 April 2026 02:46:36 +0000 (0:00:01.678) 0:01:03.032 ********** 2026-04-05 02:48:17.473400 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.473412 | 
orchestrator | 2026-04-05 02:48:17.473430 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-05 02:48:17.473448 | orchestrator | Sunday 05 April 2026 02:46:38 +0000 (0:00:02.562) 0:01:05.594 ********** 2026-04-05 02:48:17.473465 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:17.473484 | orchestrator | 2026-04-05 02:48:17.473503 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-05 02:48:17.473521 | orchestrator | Sunday 05 April 2026 02:46:38 +0000 (0:00:00.150) 0:01:05.744 ********** 2026-04-05 02:48:17.473536 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.473547 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:17.473558 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:17.473568 | orchestrator | 2026-04-05 02:48:17.473579 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-05 02:48:17.473590 | orchestrator | Sunday 05 April 2026 02:46:39 +0000 (0:00:00.321) 0:01:06.065 ********** 2026-04-05 02:48:17.473600 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:17.473611 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-05 02:48:17.473622 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:48:17.473633 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:48:17.473644 | orchestrator | 2026-04-05 02:48:17.473654 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-05 02:48:17.473665 | orchestrator | skipping: no hosts matched 2026-04-05 02:48:17.473676 | orchestrator | 2026-04-05 02:48:17.473686 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-05 02:48:17.473700 | orchestrator | 2026-04-05 02:48:17.473779 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-05 02:48:17.473798 | orchestrator | Sunday 05 April 2026 02:46:39 +0000 (0:00:00.586) 0:01:06.652 ********** 2026-04-05 02:48:17.473816 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:48:17.473834 | orchestrator | 2026-04-05 02:48:17.473851 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 02:48:17.473871 | orchestrator | Sunday 05 April 2026 02:46:58 +0000 (0:00:18.767) 0:01:25.420 ********** 2026-04-05 02:48:17.473890 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.473909 | orchestrator | 2026-04-05 02:48:17.473928 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 02:48:17.473940 | orchestrator | Sunday 05 April 2026 02:47:15 +0000 (0:00:16.604) 0:01:42.024 ********** 2026-04-05 02:48:17.473950 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:17.473961 | orchestrator | 2026-04-05 02:48:17.473976 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-05 02:48:17.473987 | orchestrator | 2026-04-05 02:48:17.474006 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-05 02:48:17.474077 | orchestrator | Sunday 05 April 2026 02:47:17 +0000 (0:00:02.486) 0:01:44.511 ********** 2026-04-05 02:48:17.474101 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:48:17.474112 | orchestrator | 2026-04-05 02:48:17.474123 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 02:48:17.474134 | orchestrator | Sunday 05 April 2026 02:47:36 +0000 (0:00:18.674) 0:02:03.185 ********** 2026-04-05 02:48:17.474144 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.474155 | orchestrator | 2026-04-05 02:48:17.474557 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 02:48:17.474572 
| orchestrator | Sunday 05 April 2026 02:47:53 +0000 (0:00:16.601) 0:02:19.786 ********** 2026-04-05 02:48:17.474586 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:17.474605 | orchestrator | 2026-04-05 02:48:17.474624 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-05 02:48:17.474643 | orchestrator | 2026-04-05 02:48:17.474661 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-05 02:48:17.474681 | orchestrator | Sunday 05 April 2026 02:47:55 +0000 (0:00:02.762) 0:02:22.549 ********** 2026-04-05 02:48:17.474699 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:17.474778 | orchestrator | 2026-04-05 02:48:17.474792 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-05 02:48:17.474803 | orchestrator | Sunday 05 April 2026 02:48:08 +0000 (0:00:12.565) 0:02:35.114 ********** 2026-04-05 02:48:17.474816 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.474835 | orchestrator | 2026-04-05 02:48:17.474854 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-05 02:48:17.474873 | orchestrator | Sunday 05 April 2026 02:48:13 +0000 (0:00:05.615) 0:02:40.730 ********** 2026-04-05 02:48:17.474890 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:17.474907 | orchestrator | 2026-04-05 02:48:17.474926 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-05 02:48:17.474944 | orchestrator | 2026-04-05 02:48:17.474964 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-05 02:48:17.474983 | orchestrator | Sunday 05 April 2026 02:48:16 +0000 (0:00:02.753) 0:02:43.483 ********** 2026-04-05 02:48:17.475002 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:48:17.475020 | orchestrator | 
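The repeated "Wait for MariaDB service to sync WSREP" tasks above poll each Galera node until its `wsrep_local_state_comment` status variable reports `Synced` before the play moves on. A minimal sketch of that polling loop (a hypothetical helper, not the actual kolla-ansible implementation; the `Synced` value is Galera's documented synced-state string, and `get_state` stands in for running `SHOW STATUS LIKE 'wsrep_local_state_comment'` against the node):

```python
import time


def wait_for_wsrep_sync(get_state, timeout=120, interval=1.0, sleep=time.sleep):
    """Poll a Galera node until wsrep_local_state_comment reports 'Synced'.

    get_state: callable returning the node's current
    wsrep_local_state_comment value (e.g. 'Joined', 'Donor/Desynced',
    'Synced'). Returns True once the node is synced, False if `timeout`
    seconds elapse first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Synced":
            return True
        sleep(interval)  # back off before re-querying the node
    return False
```

In the log, this kind of check runs once per node after each rolling restart, which is why "Wait for MariaDB service to sync WSREP" appears separately for testbed-node-1, testbed-node-2, and finally the bootstrap host testbed-node-0.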
2026-04-05 02:48:17.475038 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-05 02:48:17.475073 | orchestrator | Sunday 05 April 2026 02:48:17 +0000 (0:00:00.735) 0:02:44.219 ********** 2026-04-05 02:48:30.350206 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:30.350312 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:30.350326 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:30.350338 | orchestrator | 2026-04-05 02:48:30.350349 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-05 02:48:30.350361 | orchestrator | Sunday 05 April 2026 02:48:19 +0000 (0:00:02.243) 0:02:46.462 ********** 2026-04-05 02:48:30.350370 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:30.350380 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:30.350390 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:30.350400 | orchestrator | 2026-04-05 02:48:30.350410 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-05 02:48:30.350420 | orchestrator | Sunday 05 April 2026 02:48:21 +0000 (0:00:02.188) 0:02:48.650 ********** 2026-04-05 02:48:30.350430 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:30.350440 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:30.350450 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:30.350459 | orchestrator | 2026-04-05 02:48:30.350469 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-05 02:48:30.350479 | orchestrator | Sunday 05 April 2026 02:48:24 +0000 (0:00:02.445) 0:02:51.096 ********** 2026-04-05 02:48:30.350489 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:30.350498 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:30.350508 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:48:30.350518 | orchestrator | 
2026-04-05 02:48:30.350550 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-05 02:48:30.350561 | orchestrator | Sunday 05 April 2026 02:48:26 +0000 (0:00:02.221) 0:02:53.317 ********** 2026-04-05 02:48:30.350570 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:30.350581 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:30.350591 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:30.350614 | orchestrator | 2026-04-05 02:48:30.350624 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-05 02:48:30.350633 | orchestrator | Sunday 05 April 2026 02:48:29 +0000 (0:00:02.940) 0:02:56.258 ********** 2026-04-05 02:48:30.350643 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:30.350652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:48:30.350662 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:48:30.350672 | orchestrator | 2026-04-05 02:48:30.350681 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:48:30.350692 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-05 02:48:30.350761 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-05 02:48:30.350775 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-05 02:48:30.350786 | orchestrator | 2026-04-05 02:48:30.350797 | orchestrator | 2026-04-05 02:48:30.350809 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:48:30.350820 | orchestrator | Sunday 05 April 2026 02:48:29 +0000 (0:00:00.480) 0:02:56.739 ********** 2026-04-05 02:48:30.350832 | orchestrator | =============================================================================== 2026-04-05 02:48:30.350857 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.44s 2026-04-05 02:48:30.350869 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.21s 2026-04-05 02:48:30.350881 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.57s 2026-04-05 02:48:30.350891 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2026-04-05 02:48:30.350902 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.23s 2026-04-05 02:48:30.350914 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.95s 2026-04-05 02:48:30.350925 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.62s 2026-04-05 02:48:30.350938 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.25s 2026-04-05 02:48:30.350950 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.90s 2026-04-05 02:48:30.350961 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.21s 2026-04-05 02:48:30.350972 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.00s 2026-04-05 02:48:30.350983 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.94s 2026-04-05 02:48:30.351005 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.75s 2026-04-05 02:48:30.351016 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.68s 2026-04-05 02:48:30.351026 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.56s 2026-04-05 02:48:30.351038 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.51s 2026-04-05 02:48:30.351050 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.45s 2026-04-05 02:48:30.351061 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.34s 2026-04-05 02:48:30.351072 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.24s 2026-04-05 02:48:30.351084 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.23s 2026-04-05 02:48:32.900337 | orchestrator | 2026-04-05 02:48:32 | INFO  | Task da148d07-6b0f-4766-867d-e4eb843d81f0 (rabbitmq) was prepared for execution. 2026-04-05 02:48:32.900418 | orchestrator | 2026-04-05 02:48:32 | INFO  | It takes a moment until task da148d07-6b0f-4766-867d-e4eb843d81f0 (rabbitmq) has been started and output is visible here. 2026-04-05 02:48:46.419615 | orchestrator | 2026-04-05 02:48:46.419819 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 02:48:46.419842 | orchestrator | 2026-04-05 02:48:46.419854 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 02:48:46.419866 | orchestrator | Sunday 05 April 2026 02:48:37 +0000 (0:00:00.171) 0:00:00.171 ********** 2026-04-05 02:48:46.419877 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:46.419890 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:48:46.419901 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:48:46.419912 | orchestrator | 2026-04-05 02:48:46.419923 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 02:48:46.419934 | orchestrator | Sunday 05 April 2026 02:48:37 +0000 (0:00:00.313) 0:00:00.484 ********** 2026-04-05 02:48:46.419946 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-05 02:48:46.419958 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-05 02:48:46.419969 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-05 02:48:46.419980 | orchestrator | 2026-04-05 02:48:46.419991 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-05 02:48:46.420001 | orchestrator | 2026-04-05 02:48:46.420011 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 02:48:46.420021 | orchestrator | Sunday 05 April 2026 02:48:38 +0000 (0:00:00.556) 0:00:01.041 ********** 2026-04-05 02:48:46.420032 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:48:46.420042 | orchestrator | 2026-04-05 02:48:46.420052 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-05 02:48:46.420062 | orchestrator | Sunday 05 April 2026 02:48:38 +0000 (0:00:00.544) 0:00:01.586 ********** 2026-04-05 02:48:46.420072 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:46.420081 | orchestrator | 2026-04-05 02:48:46.420091 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-05 02:48:46.420101 | orchestrator | Sunday 05 April 2026 02:48:39 +0000 (0:00:01.017) 0:00:02.603 ********** 2026-04-05 02:48:46.420111 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420122 | orchestrator | 2026-04-05 02:48:46.420131 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-05 02:48:46.420141 | orchestrator | Sunday 05 April 2026 02:48:40 +0000 (0:00:00.382) 0:00:02.985 ********** 2026-04-05 02:48:46.420151 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420161 | orchestrator | 2026-04-05 02:48:46.420170 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-05 02:48:46.420180 | orchestrator | Sunday 05 April 2026 02:48:40 +0000 (0:00:00.397) 0:00:03.383 ********** 
2026-04-05 02:48:46.420189 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420199 | orchestrator | 2026-04-05 02:48:46.420209 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-05 02:48:46.420218 | orchestrator | Sunday 05 April 2026 02:48:40 +0000 (0:00:00.392) 0:00:03.776 ********** 2026-04-05 02:48:46.420228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420238 | orchestrator | 2026-04-05 02:48:46.420248 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 02:48:46.420258 | orchestrator | Sunday 05 April 2026 02:48:41 +0000 (0:00:00.577) 0:00:04.354 ********** 2026-04-05 02:48:46.420284 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:48:46.420315 | orchestrator | 2026-04-05 02:48:46.420326 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-05 02:48:46.420335 | orchestrator | Sunday 05 April 2026 02:48:42 +0000 (0:00:00.904) 0:00:05.259 ********** 2026-04-05 02:48:46.420345 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:48:46.420354 | orchestrator | 2026-04-05 02:48:46.420364 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-05 02:48:46.420374 | orchestrator | Sunday 05 April 2026 02:48:43 +0000 (0:00:00.934) 0:00:06.193 ********** 2026-04-05 02:48:46.420383 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420393 | orchestrator | 2026-04-05 02:48:46.420403 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-05 02:48:46.420412 | orchestrator | Sunday 05 April 2026 02:48:43 +0000 (0:00:00.383) 0:00:06.577 ********** 2026-04-05 02:48:46.420422 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:48:46.420431 | orchestrator | 2026-04-05 
02:48:46.420441 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-05 02:48:46.420454 | orchestrator | Sunday 05 April 2026 02:48:43 +0000 (0:00:00.374) 0:00:06.952 ********** 2026-04-05 02:48:46.420498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:48:46.420521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:48:46.420540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:48:46.420568 | orchestrator | 2026-04-05 02:48:46.420593 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-05 02:48:46.420612 | orchestrator | Sunday 05 April 2026 02:48:44 +0000 (0:00:00.842) 0:00:07.794 ********** 2026-04-05 02:48:46.420629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:48:46.420662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:49:05.285775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:49:05.285866 | orchestrator | 2026-04-05 02:49:05.285877 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-05 02:49:05.285886 | orchestrator | Sunday 05 April 2026 02:48:46 +0000 (0:00:01.581) 0:00:09.376 ********** 2026-04-05 02:49:05.285910 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 02:49:05.285918 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 02:49:05.285925 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 02:49:05.285931 | orchestrator | 2026-04-05 02:49:05.285937 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-05 02:49:05.285943 | orchestrator | Sunday 05 April 2026 02:48:47 +0000 (0:00:01.520) 0:00:10.897 ********** 2026-04-05 02:49:05.285968 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 02:49:05.285980 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 02:49:05.285989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 02:49:05.285999 | orchestrator | 2026-04-05 02:49:05.286009 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-05 02:49:05.286078 | orchestrator | Sunday 05 April 2026 02:48:49 +0000 (0:00:01.739) 0:00:12.637 ********** 2026-04-05 02:49:05.286087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 02:49:05.286094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 02:49:05.286100 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 02:49:05.286106 | orchestrator | 2026-04-05 02:49:05.286112 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-05 02:49:05.286118 | orchestrator | Sunday 05 April 2026 02:48:51 +0000 (0:00:01.378) 0:00:14.015 ********** 2026-04-05 02:49:05.286125 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 02:49:05.286131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 02:49:05.286137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 02:49:05.286143 | orchestrator | 2026-04-05 02:49:05.286149 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-04-05 02:49:05.286155 | orchestrator | Sunday 05 April 2026 02:48:52 +0000 (0:00:01.692) 0:00:15.707 ********** 2026-04-05 02:49:05.286162 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 02:49:05.286168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 02:49:05.286174 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 02:49:05.286180 | orchestrator | 2026-04-05 02:49:05.286186 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-05 02:49:05.286193 | orchestrator | Sunday 05 April 2026 02:48:54 +0000 (0:00:01.418) 0:00:17.125 ********** 2026-04-05 02:49:05.286199 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 02:49:05.286205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 02:49:05.286211 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 02:49:05.286217 | orchestrator | 2026-04-05 02:49:05.286224 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 02:49:05.286230 | orchestrator | Sunday 05 April 2026 02:48:55 +0000 (0:00:01.368) 0:00:18.494 ********** 2026-04-05 02:49:05.286236 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:49:05.286243 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:49:05.286264 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:49:05.286278 | orchestrator | 2026-04-05 02:49:05.286286 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-05 02:49:05.286294 | orchestrator | Sunday 
05 April 2026 02:48:55 +0000 (0:00:00.409) 0:00:18.904 ********** 2026-04-05 02:49:05.286302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:49:05.286316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:49:05.286325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 02:49:05.286333 | orchestrator | 2026-04-05 02:49:05.286341 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-05 02:49:05.286348 | orchestrator | Sunday 05 April 2026 02:48:57 +0000 (0:00:01.234) 0:00:20.139 ********** 2026-04-05 02:49:05.286355 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:49:05.286363 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:49:05.286370 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:49:05.286377 | orchestrator | 2026-04-05 02:49:05.286385 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-05 02:49:05.286397 | orchestrator | Sunday 05 April 2026 02:48:58 +0000 (0:00:00.848) 0:00:20.987 **********
2026-04-05 02:49:05.286404 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:49:05.286412 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:49:05.286419 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:49:05.286426 | orchestrator |
2026-04-05 02:49:05.286434 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-05 02:49:05.286446 | orchestrator | Sunday 05 April 2026 02:49:05 +0000 (0:00:07.248) 0:00:28.236 **********
2026-04-05 02:50:46.283584 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:50:46.283742 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:50:46.283756 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:50:46.283763 | orchestrator |
2026-04-05 02:50:46.283775 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 02:50:46.283783 | orchestrator |
2026-04-05 02:50:46.283790 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 02:50:46.283796 | orchestrator | Sunday 05 April 2026 02:49:05 +0000 (0:00:00.562) 0:00:28.798 **********
2026-04-05 02:50:46.283803 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:50:46.283810 | orchestrator |
2026-04-05 02:50:46.283817 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 02:50:46.283824 | orchestrator | Sunday 05 April 2026 02:49:06 +0000 (0:00:00.589) 0:00:29.388 **********
2026-04-05 02:50:46.283830 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:50:46.283837 | orchestrator |
2026-04-05 02:50:46.283843 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 02:50:46.283850 | orchestrator | Sunday 05 April 2026 02:49:06 +0000 (0:00:00.243) 0:00:29.632 **********
2026-04-05 02:50:46.283857 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:50:46.283863 | orchestrator |
2026-04-05 02:50:46.283882 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 02:50:46.283888 | orchestrator | Sunday 05 April 2026 02:49:13 +0000 (0:00:06.744) 0:00:36.376 **********
2026-04-05 02:50:46.283894 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:50:46.283901 | orchestrator |
2026-04-05 02:50:46.283907 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 02:50:46.283911 | orchestrator |
2026-04-05 02:50:46.283915 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 02:50:46.283919 | orchestrator | Sunday 05 April 2026 02:50:04 +0000 (0:00:51.235) 0:01:27.612 **********
2026-04-05 02:50:46.283922 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:50:46.283928 | orchestrator |
2026-04-05 02:50:46.283934 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 02:50:46.283941 | orchestrator | Sunday 05 April 2026 02:50:05 +0000 (0:00:00.645) 0:01:28.258 **********
2026-04-05 02:50:46.283947 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:50:46.283953 | orchestrator |
2026-04-05 02:50:46.283958 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 02:50:46.283965 | orchestrator | Sunday 05 April 2026 02:50:05 +0000 (0:00:00.283) 0:01:28.542 **********
2026-04-05 02:50:46.283971 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:50:46.283977 | orchestrator |
2026-04-05 02:50:46.283983 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 02:50:46.284061 | orchestrator | Sunday 05 April 2026 02:50:07 +0000 (0:00:01.589) 0:01:30.131 **********
2026-04-05 02:50:46.284073 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:50:46.284079 | orchestrator |
2026-04-05 02:50:46.284086 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-05 02:50:46.284092 | orchestrator |
2026-04-05 02:50:46.284099 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-05 02:50:46.284105 | orchestrator | Sunday 05 April 2026 02:50:23 +0000 (0:00:16.251) 0:01:46.382 **********
2026-04-05 02:50:46.284111 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:50:46.284117 | orchestrator |
2026-04-05 02:50:46.284144 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-05 02:50:46.284151 | orchestrator | Sunday 05 April 2026 02:50:24 +0000 (0:00:00.842) 0:01:47.225 **********
2026-04-05 02:50:46.284157 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:50:46.284163 | orchestrator |
2026-04-05 02:50:46.284169 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-05 02:50:46.284175 | orchestrator | Sunday 05 April 2026 02:50:24 +0000 (0:00:00.229) 0:01:47.455 **********
2026-04-05 02:50:46.284181 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:50:46.284188 | orchestrator |
2026-04-05 02:50:46.284194 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-05 02:50:46.284201 | orchestrator | Sunday 05 April 2026 02:50:26 +0000 (0:00:01.644) 0:01:49.100 **********
2026-04-05 02:50:46.284207 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:50:46.284213 | orchestrator |
2026-04-05 02:50:46.284219 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-05 02:50:46.284229 | orchestrator |
2026-04-05 02:50:46.284234 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-05 02:50:46.284240 | orchestrator | Sunday 05 April 2026 02:50:42 +0000 (0:00:16.663) 0:02:05.764 **********
2026-04-05 02:50:46.284246 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:50:46.284252 | orchestrator |
2026-04-05 02:50:46.284259 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-05 02:50:46.284265 | orchestrator | Sunday 05 April 2026 02:50:43 +0000 (0:00:00.530) 0:02:06.294 **********
2026-04-05 02:50:46.284271 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 02:50:46.284278 | orchestrator | enable_outward_rabbitmq_True
2026-04-05 02:50:46.284284 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 02:50:46.284290 | orchestrator | outward_rabbitmq_restart
2026-04-05 02:50:46.284297 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:50:46.284303 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:50:46.284309 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:50:46.284315 | orchestrator |
2026-04-05 02:50:46.284321 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-05 02:50:46.284327 | orchestrator | skipping: no hosts matched
2026-04-05 02:50:46.284334 | orchestrator |
2026-04-05 02:50:46.284340 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-05 02:50:46.284346 | orchestrator | skipping: no hosts matched
2026-04-05 02:50:46.284353 | orchestrator |
2026-04-05 02:50:46.284360 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-05 02:50:46.284366 | orchestrator | skipping: no hosts matched
2026-04-05 02:50:46.284372 | orchestrator |
2026-04-05 02:50:46.284377 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:50:46.284404 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-05 02:50:46.284413 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:50:46.284420 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:50:46.284426 | orchestrator |
2026-04-05 02:50:46.284432 | orchestrator |
2026-04-05 02:50:46.284438 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:50:46.284444 | orchestrator | Sunday 05 April 2026 02:50:45 +0000 (0:00:02.572) 0:02:08.867 **********
2026-04-05 02:50:46.284451 | orchestrator | ===============================================================================
2026-04-05 02:50:46.284457 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.15s
2026-04-05 02:50:46.284464 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.98s
2026-04-05 02:50:46.284477 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.25s
2026-04-05 02:50:46.284485 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.57s
2026-04-05 02:50:46.284492 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.08s
2026-04-05 02:50:46.284498 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.74s
2026-04-05 02:50:46.284504 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.69s
2026-04-05 02:50:46.284513 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.58s
2026-04-05 02:50:46.284519 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.52s
2026-04-05 02:50:46.284525 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.42s
2026-04-05 02:50:46.284531 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.38s
2026-04-05 02:50:46.284538 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s
2026-04-05 02:50:46.284544 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.23s
2026-04-05 02:50:46.284551 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s
2026-04-05 02:50:46.284563 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.93s
2026-04-05 02:50:46.284570 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.90s
2026-04-05 02:50:46.284643 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s
2026-04-05 02:50:46.284651 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.84s
2026-04-05 02:50:46.284657 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s
2026-04-05 02:50:46.284663 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.58s
2026-04-05 02:50:48.889195 | orchestrator | 2026-04-05 02:50:48 | INFO  | Task 71e011f2-3869-46ae-a4a8-250eaa6402cc (openvswitch) was prepared for execution.
2026-04-05 02:50:48.889283 | orchestrator | 2026-04-05 02:50:48 | INFO  | It takes a moment until task 71e011f2-3869-46ae-a4a8-250eaa6402cc (openvswitch) has been started and output is visible here.
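The TASKS RECAP above is the per-task timing summary emitted by the job's profiling callback; the rolling RabbitMQ restart dominates (the three "Waiting for rabbitmq to start" waits add up to about 84s, since each cluster node is restarted and awaited in turn). A minimal sketch, not part of the job itself, of how such `task ----- N.NNs` recap lines can be parsed and ranked when triaging slow runs (the regex assumes the exact dash-padded layout shown in this log):

```python
import re

# One recap row: task name, a run of dashes, then a duration like "84.15s".
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs sorted longest-first; skip non-matching lines."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda t: t[1], reverse=True)

# Sample rows copied from the recap in this log.
recap = [
    "rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.15s",
    "rabbitmq : Restart rabbitmq container ----------------------------------- 9.98s",
    "rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.25s",
]
print(parse_recap(recap))
```

Sketched this way so that timestamp prefixes or separator lines (`====`) simply fail the match and are ignored rather than raising.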
2026-04-05 02:51:02.336138 | orchestrator |
2026-04-05 02:51:02.336239 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:51:02.336254 | orchestrator |
2026-04-05 02:51:02.336263 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:51:02.336273 | orchestrator | Sunday 05 April 2026 02:50:53 +0000 (0:00:00.273) 0:00:00.273 **********
2026-04-05 02:51:02.336282 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:51:02.336292 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:51:02.336301 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:51:02.336311 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:51:02.336326 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:51:02.336340 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:51:02.336354 | orchestrator |
2026-04-05 02:51:02.336369 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:51:02.336384 | orchestrator | Sunday 05 April 2026 02:50:54 +0000 (0:00:00.803) 0:00:01.077 **********
2026-04-05 02:51:02.336399 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336412 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336421 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336430 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336439 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336447 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-05 02:51:02.336456 | orchestrator |
2026-04-05 02:51:02.336487 | orchestrator | PLAY [Apply role openvswitch]
**************************************************
2026-04-05 02:51:02.336496 | orchestrator |
2026-04-05 02:51:02.336506 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-05 02:51:02.336515 | orchestrator | Sunday 05 April 2026 02:50:54 +0000 (0:00:00.628) 0:00:01.705 **********
2026-04-05 02:51:02.336525 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 02:51:02.336535 | orchestrator |
2026-04-05 02:51:02.336543 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 02:51:02.336552 | orchestrator | Sunday 05 April 2026 02:50:56 +0000 (0:00:01.316) 0:00:03.021 **********
2026-04-05 02:51:02.336561 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-05 02:51:02.336570 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-05 02:51:02.336624 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-05 02:51:02.336636 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-05 02:51:02.336645 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-05 02:51:02.336654 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-05 02:51:02.336662 | orchestrator |
2026-04-05 02:51:02.336671 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 02:51:02.336680 | orchestrator | Sunday 05 April 2026 02:50:57 +0000 (0:00:01.239) 0:00:04.261 **********
2026-04-05 02:51:02.336691 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-05 02:51:02.336702 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-05 02:51:02.336712 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-05 02:51:02.336722 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-05 02:51:02.336733 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-05 02:51:02.336743 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-05 02:51:02.336753 | orchestrator |
2026-04-05 02:51:02.336763 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 02:51:02.336774 | orchestrator | Sunday 05 April 2026 02:50:58 +0000 (0:00:01.522) 0:00:05.784 **********
2026-04-05 02:51:02.336784 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-05 02:51:02.336794 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:51:02.336805 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-05 02:51:02.336818 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:51:02.336833 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-05 02:51:02.336847 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:51:02.336862 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-05 02:51:02.336876 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:51:02.336890 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-05 02:51:02.336904 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:51:02.336917 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-05 02:51:02.336932 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:51:02.336947 | orchestrator |
2026-04-05 02:51:02.336962 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-05 02:51:02.336978 | orchestrator | Sunday 05 April 2026 02:51:00 +0000 (0:00:01.206) 0:00:06.990 **********
2026-04-05 02:51:02.336993 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:51:02.337008 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:51:02.337022 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:51:02.337031 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:51:02.337040 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:51:02.337048 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:51:02.337057 | orchestrator | 2026-04-05 02:51:02.337066 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-05 02:51:02.337085 | orchestrator | Sunday 05 April 2026 02:51:00 +0000 (0:00:00.791) 0:00:07.781 ********** 2026-04-05 02:51:02.337116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:02.337134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:02.337149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:02.337269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:02.337308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:02.337337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.855959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 02:51:04.856266 | orchestrator | 2026-04-05 02:51:04.856287 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-05 02:51:04.856307 | orchestrator | Sunday 05 April 2026 02:51:02 +0000 (0:00:01.528) 0:00:09.310 ********** 2026-04-05 02:51:04.856327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:04.856348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:04.856368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:04.856388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:04.856424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:04.856456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:07.595358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:07.595625 | orchestrator |
2026-04-05 02:51:07.595638 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-05 02:51:07.595651 | orchestrator | Sunday 05 April 2026 02:51:04 +0000 (0:00:02.513) 0:00:11.824 **********
2026-04-05 02:51:07.595662 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:51:07.595675 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:51:07.595686 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:51:07.595697 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:51:07.595707 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:51:07.595718 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:51:07.595730 | orchestrator |
2026-04-05 02:51:07.595741 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-04-05 02:51:07.595752 | orchestrator | Sunday 05 April 2026 02:51:05 +0000 (0:00:00.989) 0:00:12.813 **********
2026-04-05 02:51:07.595764 | orchestrator | changed: [testbed-node-0] => (item={'key':
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:07.595777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:07.595803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:07.595815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:07.595837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:32.952655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 02:51:32.952753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 02:51:32.952853 | orchestrator |
2026-04-05 02:51:32.952862 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952871 | orchestrator | Sunday 05 April 2026 02:51:07 +0000 (0:00:01.764) 0:00:14.577 **********
2026-04-05 02:51:32.952879 | orchestrator |
2026-04-05 02:51:32.952886 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952893 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.330) 0:00:14.908 **********
2026-04-05 02:51:32.952907 | orchestrator |
2026-04-05 02:51:32.952914 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952922 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.133) 0:00:15.041 **********
2026-04-05 02:51:32.952929 | orchestrator |
2026-04-05 02:51:32.952936 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952943 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.134) 0:00:15.175 **********
2026-04-05 02:51:32.952951 | orchestrator |
2026-04-05 02:51:32.952958 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952965 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.129) 0:00:15.304 **********
2026-04-05 02:51:32.952973 | orchestrator |
2026-04-05 02:51:32.952980 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 02:51:32.952987 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.136) 0:00:15.441 **********
2026-04-05 02:51:32.952994 | orchestrator |
2026-04-05 02:51:32.953002 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-05 02:51:32.953009 | orchestrator | Sunday 05 April 2026 02:51:08 +0000 (0:00:00.130) 0:00:15.572 **********
2026-04-05 02:51:32.953016 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:51:32.953025 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:51:32.953033 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:51:32.953040 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:51:32.953047 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:51:32.953054 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:51:32.953062 | orchestrator |
2026-04-05 02:51:32.953069 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-05 02:51:32.953077 | orchestrator | Sunday 05 April 2026 02:51:17 +0000 (0:00:08.625) 0:00:24.197 **********
2026-04-05 02:51:32.953084 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:51:32.953097 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:51:32.953104 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:51:32.953111 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:51:32.953119 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:51:32.953126 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:51:32.953133 | orchestrator |
2026-04-05 02:51:32.953141 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-05 02:51:32.953148 | orchestrator | Sunday 05 April 2026 02:51:18 +0000 (0:00:01.094) 0:00:25.292 **********
2026-04-05 02:51:32.953156 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:51:32.953163 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:51:32.953170 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:51:32.953177 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:51:32.953185 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:51:32.953194 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:51:32.953202 | orchestrator |
2026-04-05 02:51:32.953211 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-05 02:51:32.953220 | orchestrator | Sunday 05 April 2026 02:51:26 +0000 (0:00:08.056) 0:00:33.349 **********
2026-04-05 02:51:32.953229 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-05 02:51:32.953238 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-05 02:51:32.953247 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-05 02:51:32.953255 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-05 02:51:32.953264 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-05 02:51:32.953272 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-05 02:51:32.953281 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-05 02:51:32.953300 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-05 02:51:46.545182 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-05 02:51:46.545281 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-05 02:51:46.545293 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-05 02:51:46.545300 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-05 02:51:46.545308 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545316 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545323 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545330 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545337 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545344 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 02:51:46.545352 | orchestrator |
2026-04-05 02:51:46.545360 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-05 02:51:46.545369 | orchestrator | Sunday 05 April 2026 02:51:32 +0000 (0:00:06.487) 0:00:39.836 **********
2026-04-05 02:51:46.545377 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-05 02:51:46.545385 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:51:46.545394 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-05 02:51:46.545401 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:51:46.545408 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-05 02:51:46.545415 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:51:46.545422 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-05 02:51:46.545429 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-05 02:51:46.545436 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-05 02:51:46.545443 | orchestrator |
2026-04-05 02:51:46.545450 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-05 02:51:46.545457 | orchestrator | Sunday 05 April 2026 02:51:35 +0000 (0:00:02.555) 0:00:42.392 **********
2026-04-05 02:51:46.545465 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545472 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:51:46.545479 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545487 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:51:46.545494 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545501 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:51:46.545508 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545516 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545538 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-05 02:51:46.545547 | orchestrator |
2026-04-05 02:51:46.545631 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-05 02:51:46.545639 | orchestrator | Sunday 05 April 2026 02:51:38 +0000 (0:00:03.446) 0:00:45.838 **********
2026-04-05 02:51:46.545645 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:51:46.545651 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:51:46.545679 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:51:46.545687 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:51:46.545694 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:51:46.545701 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:51:46.545708 | orchestrator |
2026-04-05 02:51:46.545715 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:51:46.545724 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 02:51:46.545733 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 02:51:46.545741 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 02:51:46.545749 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 02:51:46.545757 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 02:51:46.545764 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 02:51:46.545773 | orchestrator |
2026-04-05 02:51:46.545780 | orchestrator |
2026-04-05 02:51:46.545788 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:51:46.545797 | orchestrator | Sunday 05 April 2026 02:51:46 +0000 (0:00:07.152) 0:00:52.991 **********
2026-04-05 02:51:46.545823 | orchestrator | ===============================================================================
2026-04-05 02:51:46.545833 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.21s
2026-04-05 02:51:46.545840 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.63s
2026-04-05 02:51:46.545847 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.49s
2026-04-05 02:51:46.545854 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.45s
2026-04-05 02:51:46.545861 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.56s
2026-04-05 02:51:46.545868 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.51s
2026-04-05 02:51:46.545875 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.76s
2026-04-05 02:51:46.545882 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.53s
2026-04-05 02:51:46.545889 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.52s
2026-04-05 02:51:46.545896 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.32s
2026-04-05 02:51:46.545903 | orchestrator | module-load : Load modules ---------------------------------------------- 1.24s
2026-04-05 02:51:46.545910 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.21s
2026-04-05 02:51:46.545917 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.09s
2026-04-05 02:51:46.545923 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.99s
2026-04-05 02:51:46.545931 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.99s
2026-04-05 02:51:46.545937 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2026-04-05 02:51:46.545944 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.79s
2026-04-05 02:51:46.545951 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-04-05 02:51:49.051095 | orchestrator | 2026-04-05 02:51:49 | INFO  | Task 7d5108d5-3c33-4628-99e1-b96bd82dd018 (ovn) was prepared for execution.
2026-04-05 02:51:49.051170 | orchestrator | 2026-04-05 02:51:49 | INFO  | It takes a moment until task 7d5108d5-3c33-4628-99e1-b96bd82dd018 (ovn) has been started and output is visible here.
2026-04-05 02:52:00.313320 | orchestrator |
2026-04-05 02:52:00.313426 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 02:52:00.313438 | orchestrator |
2026-04-05 02:52:00.313446 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 02:52:00.313453 | orchestrator | Sunday 05 April 2026 02:51:53 +0000 (0:00:00.167) 0:00:00.167 **********
2026-04-05 02:52:00.313460 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:52:00.313468 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:52:00.313475 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:52:00.313482 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:52:00.313489 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:52:00.313496 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:52:00.313503 | orchestrator |
2026-04-05 02:52:00.313510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 02:52:00.313517 | orchestrator | Sunday 05 April 2026 02:51:54 +0000 (0:00:00.808) 0:00:00.976 **********
2026-04-05 02:52:00.313539 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-05 02:52:00.313574 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-05 02:52:00.313582 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-05 02:52:00.313589 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-05 02:52:00.313596 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-05 02:52:00.313603 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-05 02:52:00.313609 | orchestrator |
2026-04-05 02:52:00.313617 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-05 02:52:00.313624 | orchestrator |
2026-04-05 02:52:00.313631 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-05 02:52:00.313638 | orchestrator | Sunday 05 April 2026 02:51:55 +0000 (0:00:00.946) 0:00:01.923 **********
2026-04-05 02:52:00.313646 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:52:00.313654 | orchestrator |
2026-04-05 02:52:00.313662 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-05 02:52:00.313673 | orchestrator | Sunday 05 April 2026 02:51:56 +0000 (0:00:01.263) 0:00:03.187 **********
2026-04-05 02:52:00.313687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313795 | orchestrator |
2026-04-05 02:52:00.313802 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-05 02:52:00.313809 | orchestrator | Sunday 05 April 2026 02:51:57 +0000 (0:00:01.187) 0:00:04.374 **********
2026-04-05 02:52:00.313821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:00.313873 | orchestrator |
2026-04-05 02:52:00.313881 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-05 02:52:00.313889 | orchestrator | Sunday 05 April 2026 02:51:59 +0000 (0:00:01.510) 0:00:05.884 **********
2026-04-05 02:52:00.313897 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:00.313906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:00.313919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.146952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147087 | orchestrator | 2026-04-05 02:52:25.147100 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-05 02:52:25.147112 | orchestrator | Sunday 05 April 2026 02:52:00 +0000 (0:00:01.182) 0:00:07.067 ********** 2026-04-05 02:52:25.147124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:52:25.147234 | orchestrator | 2026-04-05 02:52:25.147245 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-05 02:52:25.147256 | orchestrator | Sunday 05 April 2026 02:52:01 +0000 (0:00:01.564) 0:00:08.632 ********** 
2026-04-05 02:52:25.147273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 02:52:25.147349 | orchestrator |
2026-04-05 02:52:25.147360 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-05 02:52:25.147371 | orchestrator | Sunday 05 April 2026 02:52:03 +0000 (0:00:01.401) 0:00:10.033 **********
2026-04-05 02:52:25.147383 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:52:25.147395 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:52:25.147406 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:52:25.147416 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:52:25.147427 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:52:25.147438 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:52:25.147448 | orchestrator |
2026-04-05 02:52:25.147460 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-04-05 02:52:25.147472 | orchestrator | Sunday 05 April 2026 02:52:05 +0000 (0:00:02.458) 0:00:12.492 **********
2026-04-05 02:52:25.147485 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-04-05 02:52:25.147500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-04-05 02:52:25.147512 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-04-05 02:52:25.147524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-04-05 02:52:25.147569 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-04-05 02:52:25.147582 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-04-05 02:52:25.147603 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.191787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.191928 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.192014 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.192027 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.192039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-04-05 02:53:04.192051 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192070 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192120 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192141 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-04-05 02:53:04.192187 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192199 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192244 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 02:53:04.192262 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192303 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192338 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192358 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 02:53:04.192413 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192427 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192453 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192466 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 02:53:04.192505 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 02:53:04.192550 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 02:53:04.192569 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 02:53:04.192591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 02:53:04.192609 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 02:53:04.192628 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 02:53:04.192648 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-05 02:53:04.192707 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-05 02:53:04.192723 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-05 02:53:04.192743 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-05 02:53:04.192755 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-05 02:53:04.192765 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-05 02:53:04.192776 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 02:53:04.192787 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 02:53:04.192798 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 02:53:04.192810 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 02:53:04.192828 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 02:53:04.192848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 02:53:04.192867 | orchestrator |
2026-04-05 02:53:04.192888 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.192907 | orchestrator | Sunday 05 April 2026 02:52:24 +0000 (0:00:18.819) 0:00:31.311 **********
2026-04-05 02:53:04.192926 | orchestrator |
2026-04-05 02:53:04.192937 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.192948 | orchestrator | Sunday 05 April 2026 02:52:24 +0000 (0:00:00.249) 0:00:31.560 **********
2026-04-05 02:53:04.192959 | orchestrator |
2026-04-05 02:53:04.192970 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.192980 | orchestrator | Sunday 05 April 2026 02:52:24 +0000 (0:00:00.064) 0:00:31.625 **********
2026-04-05 02:53:04.192991 | orchestrator |
2026-04-05 02:53:04.193002 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.193012 | orchestrator | Sunday 05 April 2026 02:52:24 +0000 (0:00:00.068) 0:00:31.693 **********
2026-04-05 02:53:04.193023 | orchestrator |
2026-04-05 02:53:04.193034 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.193045 | orchestrator | Sunday 05 April 2026 02:52:24 +0000 (0:00:00.067) 0:00:31.761 **********
2026-04-05 02:53:04.193056 | orchestrator |
2026-04-05 02:53:04.193067 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 02:53:04.193078 | orchestrator | Sunday 05 April 2026 02:52:25 +0000 (0:00:00.064) 0:00:31.826 **********
2026-04-05 02:53:04.193088 | orchestrator |
2026-04-05 02:53:04.193099 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-04-05 02:53:04.193110 | orchestrator | Sunday 05 April 2026 02:52:25 +0000 (0:00:00.063) 0:00:31.890 **********
2026-04-05 02:53:04.193121 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:53:04.193133 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:53:04.193144 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:53:04.193155 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:04.193165 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:04.193176 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:04.193187 | orchestrator |
2026-04-05 02:53:04.193198 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-05 02:53:04.193209 | orchestrator | Sunday 05 April 2026 02:52:26 +0000 (0:00:01.550) 0:00:33.440 **********
2026-04-05 02:53:04.193228 | orchestrator | changed: [testbed-node-0]
2026-04-05 02:53:04.193239 | orchestrator | changed: [testbed-node-3]
2026-04-05 02:53:04.193250 | orchestrator | changed: [testbed-node-4]
2026-04-05 02:53:04.193261 | orchestrator | changed: [testbed-node-5]
2026-04-05 02:53:04.193271 | orchestrator | changed: [testbed-node-1]
2026-04-05 02:53:04.193282 | orchestrator | changed: [testbed-node-2]
2026-04-05 02:53:04.193301 | orchestrator |
2026-04-05 02:53:04.193320 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-05 02:53:04.193337 | orchestrator |
2026-04-05 02:53:04.193356 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 02:53:04.193374 | orchestrator | Sunday 05 April 2026 02:53:01 +0000 (0:00:35.122) 0:01:08.562 **********
2026-04-05 02:53:04.193393 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:53:04.193412 | orchestrator |
2026-04-05 02:53:04.193429 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 02:53:04.193448 | orchestrator | Sunday 05 April 2026 02:53:02 +0000 (0:00:00.793) 0:01:09.356 **********
2026-04-05 02:53:04.193467 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:53:04.193485 | orchestrator |
2026-04-05 02:53:04.193502 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-05 02:53:04.193590 | orchestrator | Sunday 05 April 2026 02:53:03 +0000 (0:00:00.606) 0:01:09.963 **********
2026-04-05 02:53:04.193605 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:04.193616 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:04.193627 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:04.193637 | orchestrator |
2026-04-05 02:53:04.193654 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-05 02:53:04.193683 | orchestrator | Sunday 05 April 2026 02:53:04 +0000 (0:00:00.973) 0:01:10.937 **********
2026-04-05 02:53:15.976333 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.976439 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.976457 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.976468 | orchestrator |
2026-04-05 02:53:15.976477 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-05 02:53:15.976501 | orchestrator | Sunday 05 April 2026 02:53:04 +0000 (0:00:00.354) 0:01:11.291 **********
2026-04-05 02:53:15.976576 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.976587 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.976596 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.976605 | orchestrator |
2026-04-05 02:53:15.976614 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-05 02:53:15.976623 | orchestrator | Sunday 05 April 2026 02:53:04 +0000 (0:00:00.365) 0:01:11.657 **********
2026-04-05 02:53:15.976632 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.976641 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.976649 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.976658 | orchestrator |
2026-04-05 02:53:15.976667 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-05 02:53:15.976676 | orchestrator | Sunday 05 April 2026 02:53:05 +0000 (0:00:00.373) 0:01:12.031 **********
2026-04-05 02:53:15.976685 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.976694 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.976702 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.976711 | orchestrator |
2026-04-05 02:53:15.976720 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-05 02:53:15.976729 | orchestrator | Sunday 05 April 2026 02:53:05 +0000 (0:00:00.567) 0:01:12.599 **********
2026-04-05 02:53:15.976738 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.976748 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.976757 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.976766 | orchestrator |
2026-04-05 02:53:15.976775 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-05 02:53:15.976805 | orchestrator | Sunday 05 April 2026 02:53:06 +0000 (0:00:00.338) 0:01:12.937 **********
2026-04-05 02:53:15.976814 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.976823 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.976832 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.976840 | orchestrator |
2026-04-05 02:53:15.976849 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-05 02:53:15.976858 | orchestrator | Sunday 05 April 2026 02:53:06 +0000 (0:00:00.309) 0:01:13.247 **********
2026-04-05 02:53:15.976866 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.976875 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.976884 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.976894 | orchestrator |
2026-04-05 02:53:15.976904 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-05 02:53:15.976915 | orchestrator | Sunday 05 April 2026 02:53:06 +0000 (0:00:00.325) 0:01:13.572 **********
2026-04-05 02:53:15.976925 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.976935 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.976945 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.976955 | orchestrator |
2026-04-05 02:53:15.976965 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-05 02:53:15.976976 | orchestrator | Sunday 05 April 2026 02:53:07 +0000 (0:00:00.288) 0:01:13.861 **********
2026-04-05 02:53:15.976986 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.976996 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977007 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977016 | orchestrator |
2026-04-05 02:53:15.977026 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-05 02:53:15.977036 | orchestrator | Sunday 05 April 2026 02:53:07 +0000 (0:00:00.526) 0:01:14.387 **********
2026-04-05 02:53:15.977046 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977056 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977066 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977076 | orchestrator |
2026-04-05 02:53:15.977087 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-05 02:53:15.977097 | orchestrator | Sunday 05 April 2026 02:53:07 +0000 (0:00:00.332) 0:01:14.720 **********
2026-04-05 02:53:15.977107 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977118 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977128 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977138 | orchestrator |
2026-04-05 02:53:15.977148 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-05 02:53:15.977158 | orchestrator | Sunday 05 April 2026 02:53:08 +0000 (0:00:00.333) 0:01:15.053 **********
2026-04-05 02:53:15.977168 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977178 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977188 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977199 | orchestrator |
2026-04-05 02:53:15.977209 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-05 02:53:15.977219 | orchestrator | Sunday 05 April 2026 02:53:08 +0000 (0:00:00.315) 0:01:15.369 **********
2026-04-05 02:53:15.977228 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977236 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977245 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977254 | orchestrator |
2026-04-05 02:53:15.977262 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-05 02:53:15.977271 | orchestrator | Sunday 05 April 2026 02:53:09 +0000 (0:00:00.606) 0:01:15.975 **********
2026-04-05 02:53:15.977280 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977288 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977297 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977306 | orchestrator |
2026-04-05 02:53:15.977314 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-05 02:53:15.977332 | orchestrator | Sunday 05 April 2026 02:53:09 +0000 (0:00:00.356) 0:01:16.331 **********
2026-04-05 02:53:15.977341 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977349 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977358 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977366 | orchestrator |
2026-04-05 02:53:15.977375 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-05 02:53:15.977384 | orchestrator | Sunday 05 April 2026 02:53:09 +0000 (0:00:00.309) 0:01:16.641 **********
2026-04-05 02:53:15.977409 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977426 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977440 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977464 | orchestrator |
2026-04-05 02:53:15.977481 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 02:53:15.977503 | orchestrator | Sunday 05 April 2026 02:53:10 +0000 (0:00:00.378) 0:01:17.020 **********
2026-04-05 02:53:15.977544 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:53:15.977559 | orchestrator |
2026-04-05 02:53:15.977573 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-05 02:53:15.977588 | orchestrator | Sunday 05 April 2026 02:53:11 +0000 (0:00:00.913) 0:01:17.933 **********
2026-04-05 02:53:15.977604 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.977618 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.977632 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.977645 | orchestrator |
2026-04-05 02:53:15.977655 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-05 02:53:15.977664 | orchestrator | Sunday 05 April 2026 02:53:11 +0000 (0:00:00.466) 0:01:18.400 **********
2026-04-05 02:53:15.977672 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:53:15.977681 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:53:15.977689 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:53:15.977698 | orchestrator |
2026-04-05 02:53:15.977707 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-05 02:53:15.977716 | orchestrator | Sunday 05 April 2026 02:53:12 +0000 (0:00:00.470) 0:01:18.871 **********
2026-04-05 02:53:15.977724 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977733 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977742 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977751 | orchestrator |
2026-04-05 02:53:15.977760 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-05 02:53:15.977769 | orchestrator | Sunday 05 April 2026 02:53:12 +0000 (0:00:00.333) 0:01:19.205 **********
2026-04-05 02:53:15.977777 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977786 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977795 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977803 | orchestrator |
2026-04-05 02:53:15.977812 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-05 02:53:15.977821 | orchestrator | Sunday 05 April 2026 02:53:12 +0000 (0:00:00.546) 0:01:19.751 **********
2026-04-05 02:53:15.977830 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977838 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977847 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977856 | orchestrator |
2026-04-05 02:53:15.977865 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-05 02:53:15.977873 | orchestrator | Sunday 05 April 2026 02:53:13 +0000 (0:00:00.334) 0:01:20.086 **********
2026-04-05 02:53:15.977882 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:53:15.977891 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:53:15.977899 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:53:15.977908 | orchestrator |
2026-04-05 02:53:15.977917 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-04-05 02:53:15.977926 | orchestrator | Sunday 05 April 2026 02:53:13 +0000 (0:00:00.334) 0:01:20.421 ********** 2026-04-05 02:53:15.977946 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:53:15.977955 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:53:15.977966 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:53:15.977981 | orchestrator | 2026-04-05 02:53:15.977995 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-05 02:53:15.978009 | orchestrator | Sunday 05 April 2026 02:53:13 +0000 (0:00:00.321) 0:01:20.742 ********** 2026-04-05 02:53:15.978098 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:53:15.978116 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:53:15.978125 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:53:15.978134 | orchestrator | 2026-04-05 02:53:15.978143 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-05 02:53:15.978152 | orchestrator | Sunday 05 April 2026 02:53:14 +0000 (0:00:00.562) 0:01:21.304 ********** 2026-04-05 02:53:15.978163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:15.978175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-05 02:53:15.978184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:15.978211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286750 | orchestrator | 2026-04-05 02:53:22.286760 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-05 02:53:22.286770 | orchestrator | Sunday 05 April 2026 02:53:15 +0000 (0:00:01.424) 0:01:22.729 ********** 2026-04-05 02:53:22.286780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286901 | orchestrator | 2026-04-05 02:53:22.286909 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-05 02:53:22.286918 | orchestrator | Sunday 05 April 2026 02:53:19 +0000 (0:00:03.828) 0:01:26.557 ********** 2026-04-05 02:53:22.286927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:22.286982 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.009094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.009247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.009266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.009295 | orchestrator | 2026-04-05 02:53:47.009307 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:47.009328 | 
orchestrator | Sunday 05 April 2026 02:53:21 +0000 (0:00:02.068) 0:01:28.625 ********** 2026-04-05 02:53:47.009338 | orchestrator | 2026-04-05 02:53:47.009348 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:47.009358 | orchestrator | Sunday 05 April 2026 02:53:21 +0000 (0:00:00.068) 0:01:28.694 ********** 2026-04-05 02:53:47.009367 | orchestrator | 2026-04-05 02:53:47.009377 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:47.009386 | orchestrator | Sunday 05 April 2026 02:53:22 +0000 (0:00:00.274) 0:01:28.968 ********** 2026-04-05 02:53:47.009396 | orchestrator | 2026-04-05 02:53:47.009405 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-05 02:53:47.009415 | orchestrator | Sunday 05 April 2026 02:53:22 +0000 (0:00:00.066) 0:01:29.034 ********** 2026-04-05 02:53:47.009425 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:53:47.009436 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:53:47.009446 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:53:47.009456 | orchestrator | 2026-04-05 02:53:47.009465 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-05 02:53:47.009475 | orchestrator | Sunday 05 April 2026 02:53:29 +0000 (0:00:07.574) 0:01:36.609 ********** 2026-04-05 02:53:47.009485 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:53:47.009495 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:53:47.009504 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:53:47.009538 | orchestrator | 2026-04-05 02:53:47.009548 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-05 02:53:47.009558 | orchestrator | Sunday 05 April 2026 02:53:37 +0000 (0:00:07.324) 0:01:43.933 ********** 2026-04-05 02:53:47.009568 | orchestrator | changed: 
[testbed-node-0] 2026-04-05 02:53:47.009577 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:53:47.009587 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:53:47.009596 | orchestrator | 2026-04-05 02:53:47.009606 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-05 02:53:47.009616 | orchestrator | Sunday 05 April 2026 02:53:39 +0000 (0:00:02.540) 0:01:46.474 ********** 2026-04-05 02:53:47.009625 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:53:47.009635 | orchestrator | 2026-04-05 02:53:47.009645 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-05 02:53:47.009655 | orchestrator | Sunday 05 April 2026 02:53:39 +0000 (0:00:00.126) 0:01:46.600 ********** 2026-04-05 02:53:47.009664 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:53:47.009675 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:53:47.009685 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:53:47.009694 | orchestrator | 2026-04-05 02:53:47.009704 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-05 02:53:47.009714 | orchestrator | Sunday 05 April 2026 02:53:40 +0000 (0:00:01.100) 0:01:47.700 ********** 2026-04-05 02:53:47.009734 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:53:47.009762 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:53:47.009772 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:53:47.009782 | orchestrator | 2026-04-05 02:53:47.009792 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-05 02:53:47.009801 | orchestrator | Sunday 05 April 2026 02:53:41 +0000 (0:00:00.632) 0:01:48.333 ********** 2026-04-05 02:53:47.009811 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:53:47.009821 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:53:47.009830 | orchestrator | ok: [testbed-node-2] 2026-04-05 
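Aside: the "Get OVN_Northbound cluster leader" / "Divide hosts by their leader/follower role" tasks above work from OVN's Raft status output. A minimal illustrative sketch of that classification follows; it is not the kolla-ansible role's actual implementation, and the sample text only mimics the standard `ovs-appctl cluster/status` format (IDs and addresses are made up).

```python
# Illustrative only: classify an OVN DB host as leader/follower by parsing
# `ovs-appctl ... cluster/status OVN_Northbound`-style output. The real logic
# lives in kolla-ansible's ovn-db role; the "Role:" line is standard OVN output.

def classify_role(cluster_status: str) -> str:
    """Return the value of the Role: line ('leader' or 'follower')."""
    for line in cluster_status.splitlines():
        line = line.strip()
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no Role: line found in cluster/status output")

# Fabricated sample in the shape of real cluster/status output.
sample = """\
1b4e
Name: OVN_Northbound
Cluster ID: f702 (f702a180-0000-0000-0000-000000000000)
Server ID: 1b4e (1b4e7d9a-0000-0000-0000-000000000000)
Address: tcp:192.0.2.10:6643
Status: cluster member
Role: leader
Term: 2
"""
print(classify_role(sample))  # -> leader
```

A host whose status reports `Role: follower` would be grouped accordingly; the role then runs the connection-settings tasks only on the leader, which matches the `skipping`/`changed` pattern visible above.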
02:53:47.009840 | orchestrator | 2026-04-05 02:53:47.009849 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-05 02:53:47.009874 | orchestrator | Sunday 05 April 2026 02:53:42 +0000 (0:00:00.804) 0:01:49.137 ********** 2026-04-05 02:53:47.009884 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:53:47.009893 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:53:47.009903 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:53:47.009912 | orchestrator | 2026-04-05 02:53:47.009922 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-05 02:53:47.009932 | orchestrator | Sunday 05 April 2026 02:53:43 +0000 (0:00:00.717) 0:01:49.854 ********** 2026-04-05 02:53:47.009942 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:53:47.009951 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:53:47.009980 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:53:47.009990 | orchestrator | 2026-04-05 02:53:47.010000 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-05 02:53:47.010011 | orchestrator | Sunday 05 April 2026 02:53:44 +0000 (0:00:01.217) 0:01:51.072 ********** 2026-04-05 02:53:47.010081 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:53:47.010091 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:53:47.010101 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:53:47.010110 | orchestrator | 2026-04-05 02:53:47.010120 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-05 02:53:47.010130 | orchestrator | Sunday 05 April 2026 02:53:45 +0000 (0:00:00.775) 0:01:51.847 ********** 2026-04-05 02:53:47.010140 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:53:47.010149 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:53:47.010159 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:53:47.010168 | orchestrator | 2026-04-05 
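Aside: the "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks above poll the database listener ports until they accept connections (conventionally 6641 for NB and 6642 for SB; the exact ports and hosts in this deployment are not shown in the log). A hedged sketch of such a wait loop, demoed against a throwaway local listener rather than a real OVN container:

```python
# Sketch of a TCP port-liveness wait similar to the Wait-for tasks above.
# Hosts/ports here are illustrative stand-ins, not taken from this deployment.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 10.0) -> bool:
    """Poll until a TCP connect to host:port succeeds or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.2)
    return False

# Demo: a local throwaway listener stands in for the ovn_nb_db container.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(wait_for_port("127.0.0.1", srv.getsockname()[1], timeout=2))  # -> True
srv.close()
```

Ansible's `wait_for` module implements the same idea declaratively, which is presumably what these tasks use.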
02:53:47.010178 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-05 02:53:47.010187 | orchestrator | Sunday 05 April 2026 02:53:45 +0000 (0:00:00.328) 0:01:52.176 ********** 2026-04-05 02:53:47.010199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010211 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010231 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010249 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010259 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:47.010311 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654445 | orchestrator | 2026-04-05 02:53:54.654715 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-05 02:53:54.654748 | orchestrator | Sunday 05 April 2026 02:53:46 +0000 (0:00:01.576) 0:01:53.753 ********** 2026-04-05 02:53:54.654769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654809 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-05 02:53:54.654927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654937 | orchestrator | 2026-04-05 02:53:54.654947 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-05 02:53:54.654957 | orchestrator | Sunday 05 April 2026 02:53:51 +0000 (0:00:04.230) 0:01:57.984 ********** 2026-04-05 02:53:54.654987 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.654998 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
02:53:54.655018 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655057 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 02:53:54.655092 | orchestrator | 2026-04-05 02:53:54.655102 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:54.655112 | orchestrator | Sunday 05 April 2026 02:53:54 +0000 (0:00:03.184) 0:02:01.168 ********** 2026-04-05 02:53:54.655121 | orchestrator | 2026-04-05 02:53:54.655131 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:54.655140 | orchestrator | Sunday 05 April 2026 02:53:54 +0000 (0:00:00.062) 0:02:01.231 ********** 2026-04-05 02:53:54.655150 | orchestrator | 2026-04-05 02:53:54.655159 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-05 02:53:54.655169 | orchestrator | Sunday 05 April 2026 02:53:54 +0000 (0:00:00.071) 0:02:01.302 ********** 2026-04-05 02:53:54.655178 | orchestrator | 2026-04-05 02:53:54.655195 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-05 02:54:19.426271 | orchestrator | Sunday 05 April 2026 02:53:54 +0000 (0:00:00.088) 0:02:01.391 ********** 2026-04-05 02:54:19.426379 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:54:19.426393 | orchestrator | changed: 
[testbed-node-2] 2026-04-05 02:54:19.426403 | orchestrator | 2026-04-05 02:54:19.426412 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-05 02:54:19.426422 | orchestrator | Sunday 05 April 2026 02:54:00 +0000 (0:00:06.293) 0:02:07.685 ********** 2026-04-05 02:54:19.426431 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:54:19.426440 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:54:19.426449 | orchestrator | 2026-04-05 02:54:19.426458 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-05 02:54:19.426489 | orchestrator | Sunday 05 April 2026 02:54:07 +0000 (0:00:06.289) 0:02:13.975 ********** 2026-04-05 02:54:19.426498 | orchestrator | changed: [testbed-node-1] 2026-04-05 02:54:19.426570 | orchestrator | changed: [testbed-node-2] 2026-04-05 02:54:19.426580 | orchestrator | 2026-04-05 02:54:19.426588 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-05 02:54:19.426597 | orchestrator | Sunday 05 April 2026 02:54:13 +0000 (0:00:06.197) 0:02:20.172 ********** 2026-04-05 02:54:19.426606 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:54:19.426614 | orchestrator | 2026-04-05 02:54:19.426623 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-05 02:54:19.426632 | orchestrator | Sunday 05 April 2026 02:54:13 +0000 (0:00:00.180) 0:02:20.353 ********** 2026-04-05 02:54:19.426641 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:54:19.426650 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:54:19.426659 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:54:19.426667 | orchestrator | 2026-04-05 02:54:19.426676 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-05 02:54:19.426685 | orchestrator | Sunday 05 April 2026 02:54:14 +0000 (0:00:01.170) 0:02:21.523 ********** 
2026-04-05 02:54:19.426693 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:54:19.426702 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:54:19.426710 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:54:19.426719 | orchestrator | 2026-04-05 02:54:19.426728 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-05 02:54:19.426736 | orchestrator | Sunday 05 April 2026 02:54:15 +0000 (0:00:00.643) 0:02:22.167 ********** 2026-04-05 02:54:19.426746 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:54:19.426754 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:54:19.426763 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:54:19.426771 | orchestrator | 2026-04-05 02:54:19.426780 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-05 02:54:19.426789 | orchestrator | Sunday 05 April 2026 02:54:16 +0000 (0:00:00.819) 0:02:22.986 ********** 2026-04-05 02:54:19.426797 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:54:19.426806 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:54:19.426816 | orchestrator | changed: [testbed-node-0] 2026-04-05 02:54:19.426826 | orchestrator | 2026-04-05 02:54:19.426837 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-05 02:54:19.426847 | orchestrator | Sunday 05 April 2026 02:54:16 +0000 (0:00:00.671) 0:02:23.658 ********** 2026-04-05 02:54:19.426857 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:54:19.426867 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:54:19.426877 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:54:19.426887 | orchestrator | 2026-04-05 02:54:19.426897 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-05 02:54:19.426907 | orchestrator | Sunday 05 April 2026 02:54:18 +0000 (0:00:01.183) 0:02:24.841 ********** 2026-04-05 02:54:19.426917 | orchestrator 
| ok: [testbed-node-0] 2026-04-05 02:54:19.426927 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:54:19.426937 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:54:19.426947 | orchestrator | 2026-04-05 02:54:19.426957 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:54:19.426968 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-05 02:54:19.426979 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-05 02:54:19.426990 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-05 02:54:19.427000 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:54:19.427018 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:54:19.427028 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 02:54:19.427038 | orchestrator | 2026-04-05 02:54:19.427048 | orchestrator | 2026-04-05 02:54:19.427072 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:54:19.427083 | orchestrator | Sunday 05 April 2026 02:54:18 +0000 (0:00:00.926) 0:02:25.767 ********** 2026-04-05 02:54:19.427093 | orchestrator | =============================================================================== 2026-04-05 02:54:19.427103 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.12s 2026-04-05 02:54:19.427114 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.82s 2026-04-05 02:54:19.427124 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.87s 2026-04-05 02:54:19.427133 | orchestrator | ovn-db 
: Restart ovn-sb-db container ----------------------------------- 13.61s 2026-04-05 02:54:19.427144 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.74s 2026-04-05 02:54:19.427169 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.23s 2026-04-05 02:54:19.427181 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.83s 2026-04-05 02:54:19.427191 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.18s 2026-04-05 02:54:19.427201 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.46s 2026-04-05 02:54:19.427210 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.07s 2026-04-05 02:54:19.427218 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s 2026-04-05 02:54:19.427227 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.56s 2026-04-05 02:54:19.427235 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.55s 2026-04-05 02:54:19.427244 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-04-05 02:54:19.427252 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-04-05 02:54:19.427261 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s 2026-04-05 02:54:19.427269 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.26s 2026-04-05 02:54:19.427278 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.22s 2026-04-05 02:54:19.427286 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-04-05 02:54:19.427295 | orchestrator | ovn-db : Wait for 
ovn-nb-db --------------------------------------------- 1.18s 2026-04-05 02:54:19.776000 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 02:54:19.776085 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-04-05 02:54:22.253039 | orchestrator | 2026-04-05 02:54:22 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-05 02:54:32.404979 | orchestrator | 2026-04-05 02:54:32 | INFO  | Task 7bdf0f22-b57f-4438-a804-ac0cd3058d81 (wipe-partitions) was prepared for execution. 2026-04-05 02:54:32.405087 | orchestrator | 2026-04-05 02:54:32 | INFO  | It takes a moment until task 7bdf0f22-b57f-4438-a804-ac0cd3058d81 (wipe-partitions) has been started and output is visible here. 2026-04-05 02:54:45.419672 | orchestrator | 2026-04-05 02:54:45.419785 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-05 02:54:45.419801 | orchestrator | 2026-04-05 02:54:45.419811 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-05 02:54:45.419822 | orchestrator | Sunday 05 April 2026 02:54:36 +0000 (0:00:00.146) 0:00:00.146 ********** 2026-04-05 02:54:45.419857 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:54:45.419869 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:54:45.419877 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:54:45.419886 | orchestrator | 2026-04-05 02:54:45.419896 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-05 02:54:45.419907 | orchestrator | Sunday 05 April 2026 02:54:37 +0000 (0:00:00.590) 0:00:00.736 ********** 2026-04-05 02:54:45.419916 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:54:45.419925 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:54:45.419934 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:54:45.419943 | orchestrator | 2026-04-05 02:54:45.419952 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-05 02:54:45.419963 | orchestrator | Sunday 05 April 2026 02:54:37 +0000 (0:00:00.387) 0:00:01.124 ********** 2026-04-05 02:54:45.419972 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:54:45.419982 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:54:45.419992 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:54:45.420001 | orchestrator | 2026-04-05 02:54:45.420010 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-05 02:54:45.420019 | orchestrator | Sunday 05 April 2026 02:54:38 +0000 (0:00:00.594) 0:00:01.718 ********** 2026-04-05 02:54:45.420028 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:54:45.420037 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:54:45.420048 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:54:45.420058 | orchestrator | 2026-04-05 02:54:45.420068 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-05 02:54:45.420078 | orchestrator | Sunday 05 April 2026 02:54:38 +0000 (0:00:00.258) 0:00:01.977 ********** 2026-04-05 02:54:45.420088 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 02:54:45.420099 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 02:54:45.420109 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 02:54:45.420119 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 02:54:45.420129 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 02:54:45.420140 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-05 02:54:45.420167 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 02:54:45.420179 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 02:54:45.420188 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-04-05 02:54:45.420198 | orchestrator | 2026-04-05 02:54:45.420209 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-05 02:54:45.420219 | orchestrator | Sunday 05 April 2026 02:54:39 +0000 (0:00:01.287) 0:00:03.264 ********** 2026-04-05 02:54:45.420231 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 02:54:45.420242 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 02:54:45.420252 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 02:54:45.420262 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 02:54:45.420273 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 02:54:45.420283 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-05 02:54:45.420294 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 02:54:45.420305 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-05 02:54:45.420316 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 02:54:45.420325 | orchestrator | 2026-04-05 02:54:45.420334 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-05 02:54:45.420345 | orchestrator | Sunday 05 April 2026 02:54:41 +0000 (0:00:01.634) 0:00:04.899 ********** 2026-04-05 02:54:45.420357 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-05 02:54:45.420368 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-05 02:54:45.420378 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-05 02:54:45.420388 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-05 02:54:45.420409 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-05 02:54:45.420418 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-05 02:54:45.420429 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-05 02:54:45.420439 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-05 02:54:45.420449 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-05 02:54:45.420458 | orchestrator | 2026-04-05 02:54:45.420465 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-05 02:54:45.420471 | orchestrator | Sunday 05 April 2026 02:54:43 +0000 (0:00:02.125) 0:00:07.024 ********** 2026-04-05 02:54:45.420477 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:54:45.420484 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:54:45.420490 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:54:45.420529 | orchestrator | 2026-04-05 02:54:45.420536 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-04-05 02:54:45.420542 | orchestrator | Sunday 05 April 2026 02:54:44 +0000 (0:00:00.701) 0:00:07.726 ********** 2026-04-05 02:54:45.420548 | orchestrator | changed: [testbed-node-3] 2026-04-05 02:54:45.420554 | orchestrator | changed: [testbed-node-5] 2026-04-05 02:54:45.420561 | orchestrator | changed: [testbed-node-4] 2026-04-05 02:54:45.420567 | orchestrator | 2026-04-05 02:54:45.420573 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:54:45.420580 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:54:45.420589 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:54:45.420613 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:54:45.420620 | orchestrator | 2026-04-05 02:54:45.420626 | orchestrator | 2026-04-05 02:54:45.420632 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:54:45.420639 | orchestrator | Sunday 05 April 2026 02:54:45 +0000 (0:00:00.669) 
0:00:08.395 ********** 2026-04-05 02:54:45.420645 | orchestrator | =============================================================================== 2026-04-05 02:54:45.420651 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2026-04-05 02:54:45.420657 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.63s 2026-04-05 02:54:45.420663 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2026-04-05 02:54:45.420669 | orchestrator | Reload udev rules ------------------------------------------------------- 0.70s 2026-04-05 02:54:45.420676 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2026-04-05 02:54:45.420682 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-04-05 02:54:45.420688 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-04-05 02:54:45.420694 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s 2026-04-05 02:54:45.420700 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2026-04-05 02:54:58.058407 | orchestrator | 2026-04-05 02:54:58 | INFO  | Task 48ff77b9-b869-42e0-89bd-42312c51243f (facts) was prepared for execution. 2026-04-05 02:54:58.058617 | orchestrator | 2026-04-05 02:54:58 | INFO  | It takes a moment until task 48ff77b9-b869-42e0-89bd-42312c51243f (facts) has been started and output is visible here. 
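The wipe-partitions play above runs a standard disk-sanitising sequence on each OSD candidate disk: strip all signatures with wipefs, zero the first 32 MiB to clear residual Ceph/LVM metadata, then reload udev rules and re-trigger kernel device events. A minimal manual sketch of the same steps is below; the `run`/`DRY_RUN` wrapper and the device list are illustrative additions (the testbed uses /dev/sdb through /dev/sdd), not part of the actual play.

```shell
#!/usr/bin/env sh
# Sketch of the wipe-partitions sequence from the play above.
# DRY_RUN=1 (the default here) prints the commands instead of running
# them; set DRY_RUN=0 on a real node, as root, to execute.
: "${DRY_RUN:=1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

wipe_disk() {
    dev="$1"
    # Remove all filesystem, RAID and partition-table signatures.
    run wipefs -a "$dev"
    # Zero the first 32 MiB to clear any remaining metadata headers.
    run dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct
}

# Device list matches the testbed layout; adjust for your own hosts.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipe_disk "$dev"
done

# Reload udev rules and re-emit kernel device events so the cleaned
# disks are re-detected without a reboot.
run udevadm control --reload-rules
run udevadm trigger
```

Zeroing only the leading 32 MiB (rather than the whole disk) is enough here because Ceph and LVM keep their identifying metadata near the start of the device.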
2026-04-05 02:55:11.731419 | orchestrator | 2026-04-05 02:55:11.731625 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 02:55:11.731666 | orchestrator | 2026-04-05 02:55:11.731686 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 02:55:11.731704 | orchestrator | Sunday 05 April 2026 02:55:02 +0000 (0:00:00.298) 0:00:00.298 ********** 2026-04-05 02:55:11.731757 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:55:11.731778 | orchestrator | ok: [testbed-manager] 2026-04-05 02:55:11.731795 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:55:11.731812 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:55:11.731828 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:55:11.731846 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:55:11.731865 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:55:11.731885 | orchestrator | 2026-04-05 02:55:11.731903 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 02:55:11.731921 | orchestrator | Sunday 05 April 2026 02:55:03 +0000 (0:00:01.216) 0:00:01.515 ********** 2026-04-05 02:55:11.731935 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:55:11.731949 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:55:11.731962 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:55:11.731975 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:55:11.731988 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:55:11.732001 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:11.732013 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:55:11.732026 | orchestrator | 2026-04-05 02:55:11.732039 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 02:55:11.732052 | orchestrator | 2026-04-05 02:55:11.732066 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 02:55:11.732079 | orchestrator | Sunday 05 April 2026 02:55:05 +0000 (0:00:01.334) 0:00:02.849 ********** 2026-04-05 02:55:11.732092 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:55:11.732105 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:55:11.732119 | orchestrator | ok: [testbed-manager] 2026-04-05 02:55:11.732132 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:55:11.732145 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:55:11.732158 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:55:11.732170 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:55:11.732183 | orchestrator | 2026-04-05 02:55:11.732225 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 02:55:11.732238 | orchestrator | 2026-04-05 02:55:11.732251 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 02:55:11.732264 | orchestrator | Sunday 05 April 2026 02:55:10 +0000 (0:00:05.455) 0:00:08.305 ********** 2026-04-05 02:55:11.732277 | orchestrator | skipping: [testbed-manager] 2026-04-05 02:55:11.732291 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:55:11.732303 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:55:11.732314 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:55:11.732325 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:55:11.732335 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:11.732346 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:55:11.732357 | orchestrator | 2026-04-05 02:55:11.732368 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:55:11.732379 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732437 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 02:55:11.732450 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732461 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732472 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732509 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732534 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 02:55:11.732545 | orchestrator | 2026-04-05 02:55:11.732555 | orchestrator | 2026-04-05 02:55:11.732566 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:55:11.732577 | orchestrator | Sunday 05 April 2026 02:55:11 +0000 (0:00:00.582) 0:00:08.888 ********** 2026-04-05 02:55:11.732588 | orchestrator | =============================================================================== 2026-04-05 02:55:11.732599 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.46s 2026-04-05 02:55:11.732610 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-04-05 02:55:11.732620 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2026-04-05 02:55:11.732631 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-04-05 02:55:14.213315 | orchestrator | 2026-04-05 02:55:14 | INFO  | Task 5bc5dab9-56c3-4233-a327-f10291bc98f4 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-04-05 02:55:14.213442 | orchestrator | 2026-04-05 02:55:14 | INFO  | It takes a moment until task 5bc5dab9-56c3-4233-a327-f10291bc98f4 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-04-05 02:55:27.307276 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 02:55:27.307368 | orchestrator | 2.16.14 2026-04-05 02:55:27.307378 | orchestrator | 2026-04-05 02:55:27.307386 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-05 02:55:27.307394 | orchestrator | 2026-04-05 02:55:27.307400 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 02:55:27.307407 | orchestrator | Sunday 05 April 2026 02:55:19 +0000 (0:00:00.376) 0:00:00.376 ********** 2026-04-05 02:55:27.307414 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 02:55:27.307421 | orchestrator | 2026-04-05 02:55:27.307440 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 02:55:27.307446 | orchestrator | Sunday 05 April 2026 02:55:19 +0000 (0:00:00.298) 0:00:00.675 ********** 2026-04-05 02:55:27.307453 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:55:27.307459 | orchestrator | 2026-04-05 02:55:27.307466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:55:27.307472 | orchestrator | Sunday 05 April 2026 02:55:19 +0000 (0:00:00.255) 0:00:00.930 ********** 2026-04-05 02:55:27.307541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-05 02:55:27.307550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-05 02:55:27.307557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-05 02:55:27.307563 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 02:55:27.307570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 02:55:27.307580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 02:55:27.307591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 02:55:27.307601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 02:55:27.307613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 02:55:27.307628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 02:55:27.307638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 02:55:27.307648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 02:55:27.307684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 02:55:27.307695 | orchestrator |
2026-04-05 02:55:27.307704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307715 | orchestrator | Sunday 05 April 2026 02:55:20 +0000 (0:00:00.501) 0:00:01.431 **********
2026-04-05 02:55:27.307726 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307736 | orchestrator |
2026-04-05 02:55:27.307746 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307756 | orchestrator | Sunday 05 April 2026 02:55:20 +0000 (0:00:00.221) 0:00:01.652 **********
2026-04-05 02:55:27.307767 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307776 | orchestrator |
2026-04-05 02:55:27.307786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307796 | orchestrator | Sunday 05 April 2026 02:55:20 +0000 (0:00:00.261) 0:00:01.914 **********
2026-04-05 02:55:27.307806 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307817 | orchestrator |
2026-04-05 02:55:27.307828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307838 | orchestrator | Sunday 05 April 2026 02:55:20 +0000 (0:00:00.260) 0:00:02.174 **********
2026-04-05 02:55:27.307845 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307852 | orchestrator |
2026-04-05 02:55:27.307860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307867 | orchestrator | Sunday 05 April 2026 02:55:21 +0000 (0:00:00.232) 0:00:02.407 **********
2026-04-05 02:55:27.307874 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307882 | orchestrator |
2026-04-05 02:55:27.307889 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307896 | orchestrator | Sunday 05 April 2026 02:55:21 +0000 (0:00:00.219) 0:00:02.626 **********
2026-04-05 02:55:27.307903 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307911 | orchestrator |
2026-04-05 02:55:27.307918 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307924 | orchestrator | Sunday 05 April 2026 02:55:21 +0000 (0:00:00.221) 0:00:02.847 **********
2026-04-05 02:55:27.307931 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307939 | orchestrator |
2026-04-05 02:55:27.307946 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307953 | orchestrator | Sunday 05 April 2026 02:55:21 +0000 (0:00:00.243) 0:00:03.091 **********
2026-04-05 02:55:27.307960 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.307968 | orchestrator |
2026-04-05 02:55:27.307979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.307991 | orchestrator | Sunday 05 April 2026 02:55:21 +0000 (0:00:00.212) 0:00:03.303 **********
2026-04-05 02:55:27.308007 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007)
2026-04-05 02:55:27.308018 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007)
2026-04-05 02:55:27.308027 | orchestrator |
2026-04-05 02:55:27.308037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.308066 | orchestrator | Sunday 05 April 2026 02:55:22 +0000 (0:00:00.442) 0:00:03.746 **********
2026-04-05 02:55:27.308077 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51)
2026-04-05 02:55:27.308088 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51)
2026-04-05 02:55:27.308098 | orchestrator |
2026-04-05 02:55:27.308108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.308119 | orchestrator | Sunday 05 April 2026 02:55:23 +0000 (0:00:00.753) 0:00:04.499 **********
2026-04-05 02:55:27.308133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6)
2026-04-05 02:55:27.308149 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6)
2026-04-05 02:55:27.308155 | orchestrator |
2026-04-05 02:55:27.308161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.308167 | orchestrator | Sunday 05 April 2026 02:55:23 +0000 (0:00:00.760) 0:00:05.260 **********
2026-04-05 02:55:27.308173 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22)
2026-04-05 02:55:27.308180 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22)
2026-04-05 02:55:27.308186 | orchestrator |
2026-04-05 02:55:27.308193 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:27.308200 | orchestrator | Sunday 05 April 2026 02:55:24 +0000 (0:00:00.987) 0:00:06.247 **********
2026-04-05 02:55:27.308207 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 02:55:27.308215 | orchestrator |
2026-04-05 02:55:27.308222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308230 | orchestrator | Sunday 05 April 2026 02:55:25 +0000 (0:00:00.368) 0:00:06.615 **********
2026-04-05 02:55:27.308237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 02:55:27.308244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 02:55:27.308251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 02:55:27.308258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 02:55:27.308265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 02:55:27.308272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 02:55:27.308279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 02:55:27.308286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 02:55:27.308293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 02:55:27.308300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 02:55:27.308307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 02:55:27.308315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 02:55:27.308322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 02:55:27.308329 | orchestrator |
2026-04-05 02:55:27.308336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308343 | orchestrator | Sunday 05 April 2026 02:55:25 +0000 (0:00:00.423) 0:00:07.039 **********
2026-04-05 02:55:27.308350 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308357 | orchestrator |
2026-04-05 02:55:27.308364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308372 | orchestrator | Sunday 05 April 2026 02:55:25 +0000 (0:00:00.246) 0:00:07.286 **********
2026-04-05 02:55:27.308381 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308393 | orchestrator |
2026-04-05 02:55:27.308412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308425 | orchestrator | Sunday 05 April 2026 02:55:26 +0000 (0:00:00.220) 0:00:07.507 **********
2026-04-05 02:55:27.308436 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308448 | orchestrator |
2026-04-05 02:55:27.308459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308471 | orchestrator | Sunday 05 April 2026 02:55:26 +0000 (0:00:00.246) 0:00:07.753 **********
2026-04-05 02:55:27.308543 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308558 | orchestrator |
2026-04-05 02:55:27.308572 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308584 | orchestrator | Sunday 05 April 2026 02:55:26 +0000 (0:00:00.208) 0:00:07.961 **********
2026-04-05 02:55:27.308596 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308604 | orchestrator |
2026-04-05 02:55:27.308612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308619 | orchestrator | Sunday 05 April 2026 02:55:26 +0000 (0:00:00.234) 0:00:08.196 **********
2026-04-05 02:55:27.308626 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308633 | orchestrator |
2026-04-05 02:55:27.308640 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:27.308647 | orchestrator | Sunday 05 April 2026 02:55:27 +0000 (0:00:00.239) 0:00:08.435 **********
2026-04-05 02:55:27.308654 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:27.308661 | orchestrator |
2026-04-05 02:55:27.308675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.780805 | orchestrator | Sunday 05 April 2026 02:55:27 +0000 (0:00:00.227) 0:00:08.662 **********
2026-04-05 02:55:35.780910 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.780927 | orchestrator |
2026-04-05 02:55:35.780939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.780951 | orchestrator | Sunday 05 April 2026 02:55:27 +0000 (0:00:00.214) 0:00:08.877 **********
2026-04-05 02:55:35.780962 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 02:55:35.780974 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 02:55:35.780986 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 02:55:35.781030 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 02:55:35.781043 | orchestrator |
2026-04-05 02:55:35.781066 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.781078 | orchestrator | Sunday 05 April 2026 02:55:28 +0000 (0:00:01.172) 0:00:10.050 **********
2026-04-05 02:55:35.781089 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781100 | orchestrator |
2026-04-05 02:55:35.781110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.781122 | orchestrator | Sunday 05 April 2026 02:55:28 +0000 (0:00:00.277) 0:00:10.327 **********
2026-04-05 02:55:35.781132 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781143 | orchestrator |
2026-04-05 02:55:35.781154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.781165 | orchestrator | Sunday 05 April 2026 02:55:29 +0000 (0:00:00.239) 0:00:10.567 **********
2026-04-05 02:55:35.781176 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781187 | orchestrator |
2026-04-05 02:55:35.781198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:35.781208 | orchestrator | Sunday 05 April 2026 02:55:29 +0000 (0:00:00.230) 0:00:10.798 **********
2026-04-05 02:55:35.781219 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781230 | orchestrator |
2026-04-05 02:55:35.781241 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 02:55:35.781251 | orchestrator | Sunday 05 April 2026 02:55:29 +0000 (0:00:00.262) 0:00:11.060 **********
2026-04-05 02:55:35.781262 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-04-05 02:55:35.781273 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-04-05 02:55:35.781284 | orchestrator |
2026-04-05 02:55:35.781295 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 02:55:35.781314 | orchestrator | Sunday 05 April 2026 02:55:29 +0000 (0:00:00.222) 0:00:11.283 **********
2026-04-05 02:55:35.781332 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781352 | orchestrator |
2026-04-05 02:55:35.781371 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 02:55:35.781390 | orchestrator | Sunday 05 April 2026 02:55:30 +0000 (0:00:00.148) 0:00:11.432 **********
2026-04-05 02:55:35.781438 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781458 | orchestrator |
2026-04-05 02:55:35.781504 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 02:55:35.781526 | orchestrator | Sunday 05 April 2026 02:55:30 +0000 (0:00:00.148) 0:00:11.580 **********
2026-04-05 02:55:35.781545 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781563 | orchestrator |
2026-04-05 02:55:35.781582 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 02:55:35.781600 | orchestrator | Sunday 05 April 2026 02:55:30 +0000 (0:00:00.154) 0:00:11.734 **********
2026-04-05 02:55:35.781620 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:55:35.781639 | orchestrator |
2026-04-05 02:55:35.781658 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 02:55:35.781679 | orchestrator | Sunday 05 April 2026 02:55:30 +0000 (0:00:00.162) 0:00:11.896 **********
2026-04-05 02:55:35.781698 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14998b-6337-5d33-8563-647c08b40df2'}})
2026-04-05 02:55:35.781717 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4671660f-3880-5125-9575-24d25698498a'}})
2026-04-05 02:55:35.781729 | orchestrator |
2026-04-05 02:55:35.781740 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-05 02:55:35.781751 | orchestrator | Sunday 05 April 2026 02:55:30 +0000 (0:00:00.194) 0:00:12.090 **********
2026-04-05 02:55:35.781762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14998b-6337-5d33-8563-647c08b40df2'}})
2026-04-05 02:55:35.781775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4671660f-3880-5125-9575-24d25698498a'}})
2026-04-05 02:55:35.781786 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781796 | orchestrator |
2026-04-05 02:55:35.781807 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-05 02:55:35.781818 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.365) 0:00:12.456 **********
2026-04-05 02:55:35.781828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14998b-6337-5d33-8563-647c08b40df2'}})
2026-04-05 02:55:35.781840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4671660f-3880-5125-9575-24d25698498a'}})
2026-04-05 02:55:35.781850 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781861 | orchestrator |
2026-04-05 02:55:35.781872 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-05 02:55:35.781882 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.161) 0:00:12.617 **********
2026-04-05 02:55:35.781893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14998b-6337-5d33-8563-647c08b40df2'}})
2026-04-05 02:55:35.781923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4671660f-3880-5125-9575-24d25698498a'}})
2026-04-05 02:55:35.781934 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.781945 | orchestrator |
2026-04-05 02:55:35.781957 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-05 02:55:35.781968 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.184) 0:00:12.802 **********
2026-04-05 02:55:35.781979 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:55:35.781989 | orchestrator |
2026-04-05 02:55:35.782000 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-05 02:55:35.782073 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.157) 0:00:12.959 **********
2026-04-05 02:55:35.782086 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:55:35.782098 | orchestrator |
2026-04-05 02:55:35.782108 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-05 02:55:35.782119 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.160) 0:00:13.120 **********
2026-04-05 02:55:35.782142 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782153 | orchestrator |
2026-04-05 02:55:35.782164 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-05 02:55:35.782175 | orchestrator | Sunday 05 April 2026 02:55:31 +0000 (0:00:00.149) 0:00:13.270 **********
2026-04-05 02:55:35.782185 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782196 | orchestrator |
2026-04-05 02:55:35.782207 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-05 02:55:35.782218 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.145) 0:00:13.415 **********
2026-04-05 02:55:35.782228 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782239 | orchestrator |
2026-04-05 02:55:35.782250 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-05 02:55:35.782261 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.141) 0:00:13.557 **********
2026-04-05 02:55:35.782271 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 02:55:35.782283 | orchestrator |     "ceph_osd_devices": {
2026-04-05 02:55:35.782294 | orchestrator |         "sdb": {
2026-04-05 02:55:35.782305 | orchestrator |             "osd_lvm_uuid": "2b14998b-6337-5d33-8563-647c08b40df2"
2026-04-05 02:55:35.782317 | orchestrator |         },
2026-04-05 02:55:35.782328 | orchestrator |         "sdc": {
2026-04-05 02:55:35.782339 | orchestrator |             "osd_lvm_uuid": "4671660f-3880-5125-9575-24d25698498a"
2026-04-05 02:55:35.782350 | orchestrator |         }
2026-04-05 02:55:35.782361 | orchestrator |     }
2026-04-05 02:55:35.782372 | orchestrator | }
2026-04-05 02:55:35.782383 | orchestrator |
2026-04-05 02:55:35.782393 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 02:55:35.782404 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.158) 0:00:13.715 **********
2026-04-05 02:55:35.782415 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782426 | orchestrator |
2026-04-05 02:55:35.782436 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 02:55:35.782447 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.171) 0:00:13.887 **********
2026-04-05 02:55:35.782458 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782468 | orchestrator |
2026-04-05 02:55:35.782513 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 02:55:35.782525 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.140) 0:00:14.027 **********
2026-04-05 02:55:35.782535 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:55:35.782546 | orchestrator |
2026-04-05 02:55:35.782557 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 02:55:35.782567 | orchestrator | Sunday 05 April 2026 02:55:32 +0000 (0:00:00.137) 0:00:14.164 **********
2026-04-05 02:55:35.782578 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 02:55:35.782589 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-05 02:55:35.782600 | orchestrator |         "ceph_osd_devices": {
2026-04-05 02:55:35.782611 | orchestrator |             "sdb": {
2026-04-05 02:55:35.782622 | orchestrator |                 "osd_lvm_uuid": "2b14998b-6337-5d33-8563-647c08b40df2"
2026-04-05 02:55:35.782633 | orchestrator |             },
2026-04-05 02:55:35.782644 | orchestrator |             "sdc": {
2026-04-05 02:55:35.782655 | orchestrator |                 "osd_lvm_uuid": "4671660f-3880-5125-9575-24d25698498a"
2026-04-05 02:55:35.782666 | orchestrator |             }
2026-04-05 02:55:35.782677 | orchestrator |         },
2026-04-05 02:55:35.782687 | orchestrator |         "lvm_volumes": [
2026-04-05 02:55:35.782698 | orchestrator |             {
2026-04-05 02:55:35.782710 | orchestrator |                 "data": "osd-block-2b14998b-6337-5d33-8563-647c08b40df2",
2026-04-05 02:55:35.782720 | orchestrator |                 "data_vg": "ceph-2b14998b-6337-5d33-8563-647c08b40df2"
2026-04-05 02:55:35.782731 | orchestrator |             },
2026-04-05 02:55:35.782742 | orchestrator |             {
2026-04-05 02:55:35.782752 | orchestrator |                 "data": "osd-block-4671660f-3880-5125-9575-24d25698498a",
2026-04-05 02:55:35.782771 | orchestrator |                 "data_vg": "ceph-4671660f-3880-5125-9575-24d25698498a"
2026-04-05 02:55:35.782786 | orchestrator |             }
2026-04-05 02:55:35.782804 | orchestrator |         ]
2026-04-05 02:55:35.782823 | orchestrator |     }
2026-04-05 02:55:35.782841 | orchestrator | }
2026-04-05 02:55:35.782859 | orchestrator |
2026-04-05 02:55:35.782877 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 02:55:35.782896 | orchestrator | Sunday 05 April 2026 02:55:33 +0000 (0:00:00.478) 0:00:14.643 **********
2026-04-05 02:55:35.782914 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 02:55:35.782933 | orchestrator |
2026-04-05 02:55:35.782952 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-05 02:55:35.782971 | orchestrator |
2026-04-05 02:55:35.782985 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 02:55:35.782996 | orchestrator | Sunday 05 April 2026 02:55:35 +0000 (0:00:01.947) 0:00:16.591 **********
2026-04-05 02:55:35.783007 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-05 02:55:35.783018 | orchestrator |
2026-04-05 02:55:35.783029 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 02:55:35.783039 | orchestrator | Sunday 05 April 2026 02:55:35 +0000 (0:00:00.293) 0:00:16.885 **********
2026-04-05 02:55:35.783050 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:55:35.783061 | orchestrator |
2026-04-05 02:55:35.783082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.479825 | orchestrator | Sunday 05 April 2026 02:55:35 +0000 (0:00:00.256) 0:00:17.141 **********
2026-04-05 02:55:44.479956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-05 02:55:44.479977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-05 02:55:44.479993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-05 02:55:44.480029 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-05 02:55:44.480048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-05 02:55:44.480065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-05 02:55:44.480082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-05 02:55:44.480097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-05 02:55:44.480113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-05 02:55:44.480129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-05 02:55:44.480145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-05 02:55:44.480160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-05 02:55:44.480176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-05 02:55:44.480191 | orchestrator |
2026-04-05 02:55:44.480208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480224 | orchestrator | Sunday 05 April 2026 02:55:36 +0000 (0:00:00.398) 0:00:17.540 **********
2026-04-05 02:55:44.480239 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480256 | orchestrator |
2026-04-05 02:55:44.480272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480288 | orchestrator | Sunday 05 April 2026 02:55:36 +0000 (0:00:00.220) 0:00:17.760 **********
2026-04-05 02:55:44.480304 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480320 | orchestrator |
2026-04-05 02:55:44.480337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480355 | orchestrator | Sunday 05 April 2026 02:55:36 +0000 (0:00:00.220) 0:00:17.981 **********
2026-04-05 02:55:44.480402 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480419 | orchestrator |
2026-04-05 02:55:44.480437 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480454 | orchestrator | Sunday 05 April 2026 02:55:36 +0000 (0:00:00.205) 0:00:18.186 **********
2026-04-05 02:55:44.480471 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480525 | orchestrator |
2026-04-05 02:55:44.480541 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480557 | orchestrator | Sunday 05 April 2026 02:55:37 +0000 (0:00:00.673) 0:00:18.859 **********
2026-04-05 02:55:44.480574 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480591 | orchestrator |
2026-04-05 02:55:44.480608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480624 | orchestrator | Sunday 05 April 2026 02:55:37 +0000 (0:00:00.230) 0:00:19.090 **********
2026-04-05 02:55:44.480639 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480654 | orchestrator |
2026-04-05 02:55:44.480670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480686 | orchestrator | Sunday 05 April 2026 02:55:37 +0000 (0:00:00.210) 0:00:19.300 **********
2026-04-05 02:55:44.480702 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480717 | orchestrator |
2026-04-05 02:55:44.480733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480748 | orchestrator | Sunday 05 April 2026 02:55:38 +0000 (0:00:00.213) 0:00:19.514 **********
2026-04-05 02:55:44.480763 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.480778 | orchestrator |
2026-04-05 02:55:44.480794 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480810 | orchestrator | Sunday 05 April 2026 02:55:38 +0000 (0:00:00.227) 0:00:19.741 **********
2026-04-05 02:55:44.480826 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a)
2026-04-05 02:55:44.480844 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a)
2026-04-05 02:55:44.480860 | orchestrator |
2026-04-05 02:55:44.480876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480891 | orchestrator | Sunday 05 April 2026 02:55:38 +0000 (0:00:00.440) 0:00:20.181 **********
2026-04-05 02:55:44.480907 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55)
2026-04-05 02:55:44.480923 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55)
2026-04-05 02:55:44.480937 | orchestrator |
2026-04-05 02:55:44.480952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.480968 | orchestrator | Sunday 05 April 2026 02:55:39 +0000 (0:00:00.471) 0:00:20.653 **********
2026-04-05 02:55:44.480984 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c)
2026-04-05 02:55:44.481000 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c)
2026-04-05 02:55:44.481016 | orchestrator |
2026-04-05 02:55:44.481030 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.481065 | orchestrator | Sunday 05 April 2026 02:55:39 +0000 (0:00:00.482) 0:00:21.135 **********
2026-04-05 02:55:44.481078 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c)
2026-04-05 02:55:44.481092 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c)
2026-04-05 02:55:44.481104 | orchestrator |
2026-04-05 02:55:44.481117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:55:44.481139 | orchestrator | Sunday 05 April 2026 02:55:40 +0000 (0:00:00.439) 0:00:21.575 **********
2026-04-05 02:55:44.481151 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 02:55:44.481175 | orchestrator |
2026-04-05 02:55:44.481189 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481201 | orchestrator | Sunday 05 April 2026 02:55:40 +0000 (0:00:00.371) 0:00:21.947 **********
2026-04-05 02:55:44.481214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-05 02:55:44.481227 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-05 02:55:44.481239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-05 02:55:44.481252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-05 02:55:44.481264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-05 02:55:44.481277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-05 02:55:44.481290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-05 02:55:44.481302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-05 02:55:44.481315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-05 02:55:44.481328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-05 02:55:44.481342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-05 02:55:44.481356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-05 02:55:44.481369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-05 02:55:44.481381 | orchestrator |
2026-04-05 02:55:44.481394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481406 | orchestrator | Sunday 05 April 2026 02:55:40 +0000 (0:00:00.396) 0:00:22.344 **********
2026-04-05 02:55:44.481419 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481431 | orchestrator |
2026-04-05 02:55:44.481444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481457 | orchestrator | Sunday 05 April 2026 02:55:41 +0000 (0:00:00.745) 0:00:23.090 **********
2026-04-05 02:55:44.481469 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481556 | orchestrator |
2026-04-05 02:55:44.481569 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481580 | orchestrator | Sunday 05 April 2026 02:55:41 +0000 (0:00:00.215) 0:00:23.306 **********
2026-04-05 02:55:44.481592 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481606 | orchestrator |
2026-04-05 02:55:44.481619 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481632 | orchestrator | Sunday 05 April 2026 02:55:42 +0000 (0:00:00.237) 0:00:23.543 **********
2026-04-05 02:55:44.481644 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481657 | orchestrator |
2026-04-05 02:55:44.481669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481681 | orchestrator | Sunday 05 April 2026 02:55:42 +0000 (0:00:00.221) 0:00:23.764 **********
2026-04-05 02:55:44.481694 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481706 | orchestrator |
2026-04-05 02:55:44.481719 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481732 | orchestrator | Sunday 05 April 2026 02:55:42 +0000 (0:00:00.234) 0:00:23.999 **********
2026-04-05 02:55:44.481745 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481757 | orchestrator |
2026-04-05 02:55:44.481770 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481782 | orchestrator | Sunday 05 April 2026 02:55:42 +0000 (0:00:00.224) 0:00:24.223 **********
2026-04-05 02:55:44.481794 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481817 | orchestrator |
2026-04-05 02:55:44.481830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481843 | orchestrator | Sunday 05 April 2026 02:55:43 +0000 (0:00:00.237) 0:00:24.461 **********
2026-04-05 02:55:44.481855 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:44.481868 | orchestrator |
2026-04-05 02:55:44.481880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481893 | orchestrator | Sunday 05 April 2026 02:55:43 +0000 (0:00:00.232) 0:00:24.694 **********
2026-04-05 02:55:44.481906 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-05 02:55:44.481920 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-05 02:55:44.481933 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-05 02:55:44.481946 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-05 02:55:44.481958 | orchestrator |
2026-04-05 02:55:44.481971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:44.481984 | orchestrator | Sunday 05 April 2026 02:55:44 +0000 (0:00:00.930) 0:00:25.624 **********
2026-04-05 02:55:44.481998 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.407398 | orchestrator |
2026-04-05 02:55:51.407636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:51.407668 | orchestrator | Sunday 05 April 2026 02:55:44 +0000 (0:00:00.219) 0:00:25.844 **********
2026-04-05 02:55:51.407689 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.407710 | orchestrator |
2026-04-05 02:55:51.407729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:51.407749 | orchestrator | Sunday 05 April 2026 02:55:44 +0000 (0:00:00.217) 0:00:26.061 **********
2026-04-05 02:55:51.407789 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.407810 | orchestrator |
2026-04-05 02:55:51.407828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:55:51.407846 | orchestrator | Sunday 05 April 2026 02:55:45 +0000 (0:00:00.722) 0:00:26.784 **********
2026-04-05 02:55:51.407862 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.407880 | orchestrator |
2026-04-05 02:55:51.407898 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-05 02:55:51.407917 | orchestrator | Sunday 05 April 2026 02:55:45 +0000 (0:00:00.209) 0:00:26.994 **********
2026-04-05 02:55:51.407934 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-05 02:55:51.407952 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-05 02:55:51.407972 | orchestrator |
2026-04-05 02:55:51.407991 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-05 02:55:51.408009 | orchestrator | Sunday 05 April 2026 02:55:45 +0000 (0:00:00.192) 0:00:27.186 **********
2026-04-05 02:55:51.408028 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.408047 | orchestrator |
2026-04-05 02:55:51.408066 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-05 02:55:51.408085 | orchestrator | Sunday 05 April 2026 02:55:45 +0000 (0:00:00.147) 0:00:27.334 **********
2026-04-05 02:55:51.408105 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.408125 | orchestrator |
2026-04-05 02:55:51.408166 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-05 02:55:51.408201 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.139) 0:00:27.474 **********
2026-04-05 02:55:51.408221 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.408240 | orchestrator |
2026-04-05 02:55:51.408259 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-05 02:55:51.408273 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.155) 0:00:27.629 **********
2026-04-05 02:55:51.408286 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:55:51.408301 | orchestrator |
2026-04-05 02:55:51.408314 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-05 02:55:51.408325 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.159) 0:00:27.789 **********
2026-04-05 02:55:51.408363 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71b5f103-fb0e-5af6-8506-51783512c8b9'}})
2026-04-05 02:55:51.408376 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8259097b-349e-523a-9f4d-33b374f7dc5d'}})
2026-04-05 02:55:51.408388 | orchestrator |
2026-04-05 02:55:51.408399 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 02:55:51.408409 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.172) 0:00:27.962 ********** 2026-04-05 02:55:51.408424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71b5f103-fb0e-5af6-8506-51783512c8b9'}})  2026-04-05 02:55:51.408443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8259097b-349e-523a-9f4d-33b374f7dc5d'}})  2026-04-05 02:55:51.408460 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.408505 | orchestrator | 2026-04-05 02:55:51.408523 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 02:55:51.408541 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.161) 0:00:28.123 ********** 2026-04-05 02:55:51.408559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71b5f103-fb0e-5af6-8506-51783512c8b9'}})  2026-04-05 02:55:51.408577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8259097b-349e-523a-9f4d-33b374f7dc5d'}})  2026-04-05 02:55:51.408597 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.408615 | orchestrator | 2026-04-05 02:55:51.408633 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 02:55:51.408649 | orchestrator | Sunday 05 April 2026 02:55:46 +0000 (0:00:00.162) 0:00:28.286 ********** 2026-04-05 02:55:51.408661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71b5f103-fb0e-5af6-8506-51783512c8b9'}})  2026-04-05 02:55:51.408672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8259097b-349e-523a-9f4d-33b374f7dc5d'}})  2026-04-05 02:55:51.408682 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.408693 | 
orchestrator | 2026-04-05 02:55:51.408704 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 02:55:51.408714 | orchestrator | Sunday 05 April 2026 02:55:47 +0000 (0:00:00.161) 0:00:28.447 ********** 2026-04-05 02:55:51.408725 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:55:51.408736 | orchestrator | 2026-04-05 02:55:51.408746 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-05 02:55:51.408757 | orchestrator | Sunday 05 April 2026 02:55:47 +0000 (0:00:00.172) 0:00:28.620 ********** 2026-04-05 02:55:51.408767 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:55:51.408778 | orchestrator | 2026-04-05 02:55:51.408788 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-05 02:55:51.408804 | orchestrator | Sunday 05 April 2026 02:55:47 +0000 (0:00:00.154) 0:00:28.774 ********** 2026-04-05 02:55:51.408851 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.408870 | orchestrator | 2026-04-05 02:55:51.408887 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-05 02:55:51.408905 | orchestrator | Sunday 05 April 2026 02:55:47 +0000 (0:00:00.393) 0:00:29.168 ********** 2026-04-05 02:55:51.408922 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.408940 | orchestrator | 2026-04-05 02:55:51.408960 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-05 02:55:51.408979 | orchestrator | Sunday 05 April 2026 02:55:47 +0000 (0:00:00.157) 0:00:29.325 ********** 2026-04-05 02:55:51.409008 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:55:51.409028 | orchestrator | 2026-04-05 02:55:51.409046 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-05 02:55:51.409064 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 
(0:00:00.150) 0:00:29.476 **********
2026-04-05 02:55:51.409088 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 02:55:51.409099 | orchestrator |  "ceph_osd_devices": {
2026-04-05 02:55:51.409110 | orchestrator |  "sdb": {
2026-04-05 02:55:51.409122 | orchestrator |  "osd_lvm_uuid": "71b5f103-fb0e-5af6-8506-51783512c8b9"
2026-04-05 02:55:51.409133 | orchestrator |  },
2026-04-05 02:55:51.409144 | orchestrator |  "sdc": {
2026-04-05 02:55:51.409155 | orchestrator |  "osd_lvm_uuid": "8259097b-349e-523a-9f4d-33b374f7dc5d"
2026-04-05 02:55:51.409172 | orchestrator |  }
2026-04-05 02:55:51.409189 | orchestrator |  }
2026-04-05 02:55:51.409206 | orchestrator | }
2026-04-05 02:55:51.409224 | orchestrator |
2026-04-05 02:55:51.409244 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 02:55:51.409262 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 (0:00:00.171) 0:00:29.648 **********
2026-04-05 02:55:51.409280 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.409293 | orchestrator |
2026-04-05 02:55:51.409311 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 02:55:51.409339 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 (0:00:00.144) 0:00:29.792 **********
2026-04-05 02:55:51.409360 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.409377 | orchestrator |
2026-04-05 02:55:51.409394 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 02:55:51.409409 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 (0:00:00.144) 0:00:29.936 **********
2026-04-05 02:55:51.409424 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:55:51.409441 | orchestrator |
2026-04-05 02:55:51.409459 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 02:55:51.409519 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 (0:00:00.161) 0:00:30.098 **********
2026-04-05 02:55:51.409536 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 02:55:51.409551 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-05 02:55:51.409567 | orchestrator |  "ceph_osd_devices": {
2026-04-05 02:55:51.409583 | orchestrator |  "sdb": {
2026-04-05 02:55:51.409600 | orchestrator |  "osd_lvm_uuid": "71b5f103-fb0e-5af6-8506-51783512c8b9"
2026-04-05 02:55:51.409618 | orchestrator |  },
2026-04-05 02:55:51.409635 | orchestrator |  "sdc": {
2026-04-05 02:55:51.409652 | orchestrator |  "osd_lvm_uuid": "8259097b-349e-523a-9f4d-33b374f7dc5d"
2026-04-05 02:55:51.409667 | orchestrator |  }
2026-04-05 02:55:51.409683 | orchestrator |  },
2026-04-05 02:55:51.409699 | orchestrator |  "lvm_volumes": [
2026-04-05 02:55:51.409716 | orchestrator |  {
2026-04-05 02:55:51.409734 | orchestrator |  "data": "osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9",
2026-04-05 02:55:51.409751 | orchestrator |  "data_vg": "ceph-71b5f103-fb0e-5af6-8506-51783512c8b9"
2026-04-05 02:55:51.409768 | orchestrator |  },
2026-04-05 02:55:51.409784 | orchestrator |  {
2026-04-05 02:55:51.409803 | orchestrator |  "data": "osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d",
2026-04-05 02:55:51.409821 | orchestrator |  "data_vg": "ceph-8259097b-349e-523a-9f4d-33b374f7dc5d"
2026-04-05 02:55:51.409840 | orchestrator |  }
2026-04-05 02:55:51.409859 | orchestrator |  ]
2026-04-05 02:55:51.409877 | orchestrator |  }
2026-04-05 02:55:51.409893 | orchestrator | }
2026-04-05 02:55:51.409904 | orchestrator |
2026-04-05 02:55:51.409915 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-05 02:55:51.409926 | orchestrator | Sunday 05 April 2026 02:55:48 +0000 (0:00:00.236) 0:00:30.334 **********
2026-04-05 02:55:51.409936 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-05 02:55:51.409947 | orchestrator |
2026-04-05 02:55:51.409958 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-04-05 02:55:51.409968 | orchestrator | 2026-04-05 02:55:51.409979 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 02:55:51.409990 | orchestrator | Sunday 05 April 2026 02:55:50 +0000 (0:00:01.457) 0:00:31.792 ********** 2026-04-05 02:55:51.410014 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-05 02:55:51.410118 | orchestrator | 2026-04-05 02:55:51.410137 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 02:55:51.410155 | orchestrator | Sunday 05 April 2026 02:55:50 +0000 (0:00:00.286) 0:00:32.078 ********** 2026-04-05 02:55:51.410172 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:55:51.410189 | orchestrator | 2026-04-05 02:55:51.410206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:55:51.410225 | orchestrator | Sunday 05 April 2026 02:55:50 +0000 (0:00:00.286) 0:00:32.365 ********** 2026-04-05 02:55:51.410243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-05 02:55:51.410262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-05 02:55:51.410281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-05 02:55:51.410298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-05 02:55:51.410317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-05 02:55:51.410347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-05 02:56:00.425819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-05 02:56:00.425921 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-05 02:56:00.425952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-05 02:56:00.425964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-05 02:56:00.425998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-05 02:56:00.426008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-05 02:56:00.426071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-05 02:56:00.426085 | orchestrator | 2026-04-05 02:56:00.426096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426113 | orchestrator | Sunday 05 April 2026 02:55:51 +0000 (0:00:00.401) 0:00:32.766 ********** 2026-04-05 02:56:00.426120 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426128 | orchestrator | 2026-04-05 02:56:00.426134 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426140 | orchestrator | Sunday 05 April 2026 02:55:51 +0000 (0:00:00.241) 0:00:33.008 ********** 2026-04-05 02:56:00.426146 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426154 | orchestrator | 2026-04-05 02:56:00.426176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426194 | orchestrator | Sunday 05 April 2026 02:55:51 +0000 (0:00:00.263) 0:00:33.271 ********** 2026-04-05 02:56:00.426211 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426232 | orchestrator | 2026-04-05 02:56:00.426238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426246 | 
orchestrator | Sunday 05 April 2026 02:55:52 +0000 (0:00:00.229) 0:00:33.501 ********** 2026-04-05 02:56:00.426253 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426260 | orchestrator | 2026-04-05 02:56:00.426268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426286 | orchestrator | Sunday 05 April 2026 02:55:52 +0000 (0:00:00.222) 0:00:33.723 ********** 2026-04-05 02:56:00.426302 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426312 | orchestrator | 2026-04-05 02:56:00.426339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426357 | orchestrator | Sunday 05 April 2026 02:55:52 +0000 (0:00:00.219) 0:00:33.943 ********** 2026-04-05 02:56:00.426384 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426399 | orchestrator | 2026-04-05 02:56:00.426415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426428 | orchestrator | Sunday 05 April 2026 02:55:52 +0000 (0:00:00.214) 0:00:34.157 ********** 2026-04-05 02:56:00.426437 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426447 | orchestrator | 2026-04-05 02:56:00.426454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426491 | orchestrator | Sunday 05 April 2026 02:55:53 +0000 (0:00:00.690) 0:00:34.848 ********** 2026-04-05 02:56:00.426513 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426522 | orchestrator | 2026-04-05 02:56:00.426539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426546 | orchestrator | Sunday 05 April 2026 02:55:53 +0000 (0:00:00.223) 0:00:35.072 ********** 2026-04-05 02:56:00.426558 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131) 2026-04-05 02:56:00.426573 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131) 2026-04-05 02:56:00.426588 | orchestrator | 2026-04-05 02:56:00.426595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426605 | orchestrator | Sunday 05 April 2026 02:55:54 +0000 (0:00:00.465) 0:00:35.537 ********** 2026-04-05 02:56:00.426615 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564) 2026-04-05 02:56:00.426628 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564) 2026-04-05 02:56:00.426637 | orchestrator | 2026-04-05 02:56:00.426644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426651 | orchestrator | Sunday 05 April 2026 02:55:54 +0000 (0:00:00.499) 0:00:36.037 ********** 2026-04-05 02:56:00.426657 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4) 2026-04-05 02:56:00.426662 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4) 2026-04-05 02:56:00.426668 | orchestrator | 2026-04-05 02:56:00.426674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:56:00.426680 | orchestrator | Sunday 05 April 2026 02:55:55 +0000 (0:00:00.485) 0:00:36.523 ********** 2026-04-05 02:56:00.426687 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d) 2026-04-05 02:56:00.426693 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d) 2026-04-05 02:56:00.426698 | orchestrator | 2026-04-05 02:56:00.426705 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-05 02:56:00.426711 | orchestrator | Sunday 05 April 2026 02:55:55 +0000 (0:00:00.542) 0:00:37.065 ********** 2026-04-05 02:56:00.426717 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 02:56:00.426723 | orchestrator | 2026-04-05 02:56:00.426729 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426753 | orchestrator | Sunday 05 April 2026 02:55:56 +0000 (0:00:00.381) 0:00:37.447 ********** 2026-04-05 02:56:00.426761 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-05 02:56:00.426769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-05 02:56:00.426776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-05 02:56:00.426791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-05 02:56:00.426799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-05 02:56:00.426807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-05 02:56:00.426822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-05 02:56:00.426831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-05 02:56:00.426837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-05 02:56:00.426848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-05 02:56:00.426855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-05 02:56:00.426862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-05 02:56:00.426871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-05 02:56:00.426877 | orchestrator | 2026-04-05 02:56:00.426884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426890 | orchestrator | Sunday 05 April 2026 02:55:56 +0000 (0:00:00.424) 0:00:37.871 ********** 2026-04-05 02:56:00.426898 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426904 | orchestrator | 2026-04-05 02:56:00.426910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426917 | orchestrator | Sunday 05 April 2026 02:55:56 +0000 (0:00:00.219) 0:00:38.090 ********** 2026-04-05 02:56:00.426922 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426930 | orchestrator | 2026-04-05 02:56:00.426936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426943 | orchestrator | Sunday 05 April 2026 02:55:56 +0000 (0:00:00.229) 0:00:38.320 ********** 2026-04-05 02:56:00.426949 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426955 | orchestrator | 2026-04-05 02:56:00.426962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426968 | orchestrator | Sunday 05 April 2026 02:55:57 +0000 (0:00:00.719) 0:00:39.039 ********** 2026-04-05 02:56:00.426975 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.426981 | orchestrator | 2026-04-05 02:56:00.426987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.426994 | orchestrator | Sunday 05 April 2026 02:55:57 +0000 (0:00:00.227) 0:00:39.266 ********** 2026-04-05 02:56:00.427000 
| orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427006 | orchestrator | 2026-04-05 02:56:00.427013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427019 | orchestrator | Sunday 05 April 2026 02:55:58 +0000 (0:00:00.235) 0:00:39.502 ********** 2026-04-05 02:56:00.427026 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427031 | orchestrator | 2026-04-05 02:56:00.427038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427045 | orchestrator | Sunday 05 April 2026 02:55:58 +0000 (0:00:00.213) 0:00:39.716 ********** 2026-04-05 02:56:00.427050 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427056 | orchestrator | 2026-04-05 02:56:00.427063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427069 | orchestrator | Sunday 05 April 2026 02:55:58 +0000 (0:00:00.210) 0:00:39.927 ********** 2026-04-05 02:56:00.427074 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427081 | orchestrator | 2026-04-05 02:56:00.427087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427093 | orchestrator | Sunday 05 April 2026 02:55:58 +0000 (0:00:00.241) 0:00:40.168 ********** 2026-04-05 02:56:00.427099 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-05 02:56:00.427105 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-05 02:56:00.427111 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-05 02:56:00.427117 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-05 02:56:00.427123 | orchestrator | 2026-04-05 02:56:00.427134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427141 | orchestrator | Sunday 05 April 2026 02:55:59 +0000 (0:00:00.663) 0:00:40.832 
********** 2026-04-05 02:56:00.427147 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427153 | orchestrator | 2026-04-05 02:56:00.427158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427164 | orchestrator | Sunday 05 April 2026 02:55:59 +0000 (0:00:00.224) 0:00:41.057 ********** 2026-04-05 02:56:00.427170 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427175 | orchestrator | 2026-04-05 02:56:00.427181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427187 | orchestrator | Sunday 05 April 2026 02:55:59 +0000 (0:00:00.275) 0:00:41.333 ********** 2026-04-05 02:56:00.427193 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427199 | orchestrator | 2026-04-05 02:56:00.427204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:56:00.427209 | orchestrator | Sunday 05 April 2026 02:56:00 +0000 (0:00:00.242) 0:00:41.575 ********** 2026-04-05 02:56:00.427215 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:00.427221 | orchestrator | 2026-04-05 02:56:00.427234 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-05 02:56:05.343336 | orchestrator | Sunday 05 April 2026 02:56:00 +0000 (0:00:00.213) 0:00:41.789 ********** 2026-04-05 02:56:05.343410 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-05 02:56:05.343417 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-05 02:56:05.343421 | orchestrator | 2026-04-05 02:56:05.343426 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-05 02:56:05.343442 | orchestrator | Sunday 05 April 2026 02:56:00 +0000 (0:00:00.515) 0:00:42.305 ********** 2026-04-05 02:56:05.343447 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 02:56:05.343452 | orchestrator | 2026-04-05 02:56:05.343455 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-05 02:56:05.343459 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.166) 0:00:42.471 ********** 2026-04-05 02:56:05.343463 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:05.343521 | orchestrator | 2026-04-05 02:56:05.343526 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-05 02:56:05.343530 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.156) 0:00:42.628 ********** 2026-04-05 02:56:05.343534 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:05.343538 | orchestrator | 2026-04-05 02:56:05.343542 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-05 02:56:05.343546 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.147) 0:00:42.776 ********** 2026-04-05 02:56:05.343550 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:56:05.343555 | orchestrator | 2026-04-05 02:56:05.343558 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-05 02:56:05.343562 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.154) 0:00:42.930 ********** 2026-04-05 02:56:05.343567 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee367cf6-46c0-523d-847e-ea936940168f'}}) 2026-04-05 02:56:05.343571 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd286f04f-da20-50d3-800d-bbe3052cfbc3'}}) 2026-04-05 02:56:05.343575 | orchestrator | 2026-04-05 02:56:05.343579 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-05 02:56:05.343583 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.191) 0:00:43.122 ********** 2026-04-05 02:56:05.343587 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee367cf6-46c0-523d-847e-ea936940168f'}})  2026-04-05 02:56:05.343593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd286f04f-da20-50d3-800d-bbe3052cfbc3'}})  2026-04-05 02:56:05.343597 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:05.343616 | orchestrator | 2026-04-05 02:56:05.343620 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-05 02:56:05.343624 | orchestrator | Sunday 05 April 2026 02:56:01 +0000 (0:00:00.161) 0:00:43.284 ********** 2026-04-05 02:56:05.343628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee367cf6-46c0-523d-847e-ea936940168f'}})  2026-04-05 02:56:05.343632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd286f04f-da20-50d3-800d-bbe3052cfbc3'}})  2026-04-05 02:56:05.343636 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:05.343640 | orchestrator | 2026-04-05 02:56:05.343643 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-05 02:56:05.343647 | orchestrator | Sunday 05 April 2026 02:56:02 +0000 (0:00:00.214) 0:00:43.499 ********** 2026-04-05 02:56:05.343651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee367cf6-46c0-523d-847e-ea936940168f'}})  2026-04-05 02:56:05.343655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd286f04f-da20-50d3-800d-bbe3052cfbc3'}})  2026-04-05 02:56:05.343659 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:56:05.343662 | orchestrator | 2026-04-05 02:56:05.343666 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-05 02:56:05.343670 | orchestrator | Sunday 05 April 2026 02:56:02 +0000 
(0:00:00.187) 0:00:43.687 **********
2026-04-05 02:56:05.343674 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:56:05.343677 | orchestrator |
2026-04-05 02:56:05.343681 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-05 02:56:05.343685 | orchestrator | Sunday 05 April 2026 02:56:02 +0000 (0:00:00.153) 0:00:43.840 **********
2026-04-05 02:56:05.343689 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:56:05.343713 | orchestrator |
2026-04-05 02:56:05.343718 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-05 02:56:05.343722 | orchestrator | Sunday 05 April 2026 02:56:02 +0000 (0:00:00.187) 0:00:44.028 **********
2026-04-05 02:56:05.343726 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343729 | orchestrator |
2026-04-05 02:56:05.343733 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-05 02:56:05.343737 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.398) 0:00:44.426 **********
2026-04-05 02:56:05.343741 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343745 | orchestrator |
2026-04-05 02:56:05.343749 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-05 02:56:05.343752 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.145) 0:00:44.572 **********
2026-04-05 02:56:05.343756 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343760 | orchestrator |
2026-04-05 02:56:05.343764 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-05 02:56:05.343767 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.155) 0:00:44.727 **********
2026-04-05 02:56:05.343771 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:56:05.343775 | orchestrator |  "ceph_osd_devices": {
2026-04-05 02:56:05.343779 | orchestrator |  "sdb": {
2026-04-05 02:56:05.343796 | orchestrator |  "osd_lvm_uuid": "ee367cf6-46c0-523d-847e-ea936940168f"
2026-04-05 02:56:05.343800 | orchestrator |  },
2026-04-05 02:56:05.343804 | orchestrator |  "sdc": {
2026-04-05 02:56:05.343808 | orchestrator |  "osd_lvm_uuid": "d286f04f-da20-50d3-800d-bbe3052cfbc3"
2026-04-05 02:56:05.343812 | orchestrator |  }
2026-04-05 02:56:05.343816 | orchestrator |  }
2026-04-05 02:56:05.343820 | orchestrator | }
2026-04-05 02:56:05.343824 | orchestrator |
2026-04-05 02:56:05.343832 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-05 02:56:05.343836 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.192) 0:00:44.919 **********
2026-04-05 02:56:05.343840 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343848 | orchestrator |
2026-04-05 02:56:05.343851 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-05 02:56:05.343855 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.163) 0:00:45.083 **********
2026-04-05 02:56:05.343859 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343863 | orchestrator |
2026-04-05 02:56:05.343866 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-05 02:56:05.343870 | orchestrator | Sunday 05 April 2026 02:56:03 +0000 (0:00:00.156) 0:00:45.219 **********
2026-04-05 02:56:05.343874 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:56:05.343877 | orchestrator |
2026-04-05 02:56:05.343881 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-05 02:56:05.343885 | orchestrator | Sunday 05 April 2026 02:56:04 +0000 (0:00:00.156) 0:00:45.375 **********
2026-04-05 02:56:05.343889 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 02:56:05.343893 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-05 02:56:05.343896 | orchestrator
|  "ceph_osd_devices": { 2026-04-05 02:56:05.343900 | orchestrator |  "sdb": { 2026-04-05 02:56:05.343904 | orchestrator |  "osd_lvm_uuid": "ee367cf6-46c0-523d-847e-ea936940168f" 2026-04-05 02:56:05.343908 | orchestrator |  }, 2026-04-05 02:56:05.343912 | orchestrator |  "sdc": { 2026-04-05 02:56:05.343916 | orchestrator |  "osd_lvm_uuid": "d286f04f-da20-50d3-800d-bbe3052cfbc3" 2026-04-05 02:56:05.343919 | orchestrator |  } 2026-04-05 02:56:05.343923 | orchestrator |  }, 2026-04-05 02:56:05.343927 | orchestrator |  "lvm_volumes": [ 2026-04-05 02:56:05.343931 | orchestrator |  { 2026-04-05 02:56:05.343935 | orchestrator |  "data": "osd-block-ee367cf6-46c0-523d-847e-ea936940168f", 2026-04-05 02:56:05.343938 | orchestrator |  "data_vg": "ceph-ee367cf6-46c0-523d-847e-ea936940168f" 2026-04-05 02:56:05.343942 | orchestrator |  }, 2026-04-05 02:56:05.343946 | orchestrator |  { 2026-04-05 02:56:05.343950 | orchestrator |  "data": "osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3", 2026-04-05 02:56:05.343953 | orchestrator |  "data_vg": "ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3" 2026-04-05 02:56:05.343957 | orchestrator |  } 2026-04-05 02:56:05.343961 | orchestrator |  ] 2026-04-05 02:56:05.343965 | orchestrator |  } 2026-04-05 02:56:05.343968 | orchestrator | } 2026-04-05 02:56:05.343972 | orchestrator | 2026-04-05 02:56:05.343976 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-05 02:56:05.343980 | orchestrator | Sunday 05 April 2026 02:56:04 +0000 (0:00:00.224) 0:00:45.600 ********** 2026-04-05 02:56:05.343983 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-05 02:56:05.343987 | orchestrator | 2026-04-05 02:56:05.343991 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 02:56:05.343995 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 02:56:05.344000 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 02:56:05.344004 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 02:56:05.344008 | orchestrator | 2026-04-05 02:56:05.344011 | orchestrator | 2026-04-05 02:56:05.344015 | orchestrator | 2026-04-05 02:56:05.344019 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 02:56:05.344023 | orchestrator | Sunday 05 April 2026 02:56:05 +0000 (0:00:01.086) 0:00:46.687 ********** 2026-04-05 02:56:05.344026 | orchestrator | =============================================================================== 2026-04-05 02:56:05.344049 | orchestrator | Write configuration file ------------------------------------------------ 4.49s 2026-04-05 02:56:05.344058 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s 2026-04-05 02:56:05.344062 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s 2026-04-05 02:56:05.344065 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2026-04-05 02:56:05.344069 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-04-05 02:56:05.344073 | orchestrator | Set DB devices config data ---------------------------------------------- 0.94s 2026-04-05 02:56:05.344077 | orchestrator | Print configuration data ------------------------------------------------ 0.94s 2026-04-05 02:56:05.344080 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.93s 2026-04-05 02:56:05.344084 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-04-05 02:56:05.344088 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s 2026-04-05 
02:56:05.344092 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s 2026-04-05 02:56:05.344095 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-04-05 02:56:05.344099 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-04-05 02:56:05.344106 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-04-05 02:56:05.800713 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-04-05 02:56:05.800824 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-04-05 02:56:05.800841 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-04-05 02:56:05.800874 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.69s 2026-04-05 02:56:05.800890 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-04-05 02:56:05.800903 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-04-05 02:56:28.426620 | orchestrator | 2026-04-05 02:56:28 | INFO  | Task ece9115f-53db-4af0-bd36-c895ef587a4e (sync inventory) is running in background. Output coming soon. 
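The "Print configuration data" output above shows the mapping this play applies: each device entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the derived `lvm_volumes` list names the logical volume `osd-block-<uuid>` and the volume group `ceph-<uuid>`. A minimal Python sketch of that transformation, for illustration only (the playbook itself does this in Ansible, and the function name here is hypothetical):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping seen in the log above.
# Not the playbook's actual implementation; build_lvm_volumes is a made-up name.

def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Derive one lvm_volumes entry per OSD device from its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",     # LV name
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",       # VG name
        }
        for _, cfg in sorted(ceph_osd_devices.items())      # sdb before sdc
    ]

# The values logged for testbed-node-5:
devices = {
    "sdb": {"osd_lvm_uuid": "ee367cf6-46c0-523d-847e-ea936940168f"},
    "sdc": {"osd_lvm_uuid": "d286f04f-da20-50d3-800d-bbe3052cfbc3"},
}
print(build_lvm_volumes(devices))
```

Running this against the `ceph_osd_devices` dict from the log reproduces the `lvm_volumes` structure printed above.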
2026-04-05 02:56:58.832953 | orchestrator | 2026-04-05 02:56:29 | INFO  | Starting group_vars file reorganization
2026-04-05 02:56:58.833041 | orchestrator | 2026-04-05 02:56:29 | INFO  | Moved 0 file(s) to their respective directories
2026-04-05 02:56:58.833051 | orchestrator | 2026-04-05 02:56:29 | INFO  | Group_vars file reorganization completed
2026-04-05 02:56:58.833058 | orchestrator | 2026-04-05 02:56:33 | INFO  | Starting variable preparation from inventory
2026-04-05 02:56:58.833066 | orchestrator | 2026-04-05 02:56:35 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-05 02:56:58.833073 | orchestrator | 2026-04-05 02:56:35 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-05 02:56:58.833079 | orchestrator | 2026-04-05 02:56:35 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-05 02:56:58.833085 | orchestrator | 2026-04-05 02:56:35 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-05 02:56:58.833092 | orchestrator | 2026-04-05 02:56:35 | INFO  | Variable preparation completed
2026-04-05 02:56:58.833098 | orchestrator | 2026-04-05 02:56:37 | INFO  | Starting inventory overwrite handling
2026-04-05 02:56:58.833104 | orchestrator | 2026-04-05 02:56:37 | INFO  | Handling group overwrites in 99-overwrite
2026-04-05 02:56:58.833111 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removing group frr:children from 60-generic
2026-04-05 02:56:58.833117 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-05 02:56:58.833123 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-05 02:56:58.833151 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-05 02:56:58.833158 | orchestrator | 2026-04-05 02:56:37 | INFO  | Handling group overwrites in 20-roles
2026-04-05 02:56:58.833164 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-05 02:56:58.833170 | orchestrator | 2026-04-05 02:56:37 | INFO  | Removed 5 group(s) in total
2026-04-05 02:56:58.833177 | orchestrator | 2026-04-05 02:56:37 | INFO  | Inventory overwrite handling completed
2026-04-05 02:56:58.833183 | orchestrator | 2026-04-05 02:56:38 | INFO  | Starting merge of inventory files
2026-04-05 02:56:58.833189 | orchestrator | 2026-04-05 02:56:38 | INFO  | Inventory files merged successfully
2026-04-05 02:56:58.833195 | orchestrator | 2026-04-05 02:56:44 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-05 02:56:58.833201 | orchestrator | 2026-04-05 02:56:57 | INFO  | Successfully wrote ClusterShell configuration
2026-04-05 02:56:58.833208 | orchestrator | [master 60317a4] 2026-04-05-02-56
2026-04-05 02:56:58.833216 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-05 02:57:01.272762 | orchestrator | 2026-04-05 02:57:01 | INFO  | Task 5af7e70d-fe5d-4792-9686-da7393059a1c (ceph-create-lvm-devices) was prepared for execution.
2026-04-05 02:57:01.272904 | orchestrator | 2026-04-05 02:57:01 | INFO  | It takes a moment until task 5af7e70d-fe5d-4792-9686-da7393059a1c (ceph-create-lvm-devices) has been started and output is visible here.
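The ceph-create-lvm-devices task prepared above creates one LVM volume group and one logical volume per `lvm_volumes` entry (the "Create block VGs" and "Create block LVs" tasks). As an illustration only, the equivalent plain LVM CLI invocations can be generated like this; the `/dev/sdb` device path and `100%FREE` sizing are assumptions of the sketch, not values taken from the play, which drives LVM through Ansible modules rather than shelling out:

```python
# Illustrative sketch: render the vgcreate/lvcreate commands corresponding to
# one lvm_volumes entry. lvm_commands is a made-up helper name; the device
# path and -l 100%FREE are assumptions for the example.

def lvm_commands(device: str, entry: dict) -> list:
    return [
        # Create the per-OSD volume group on the raw device.
        f"vgcreate {entry['data_vg']} {device}",
        # Create the OSD block LV spanning the whole VG.
        f"lvcreate -l 100%FREE -n {entry['data']} {entry['data_vg']}",
    ]

# Entry taken from the testbed-node-3 output below.
entry = {
    "data": "osd-block-2b14998b-6337-5d33-8563-647c08b40df2",
    "data_vg": "ceph-2b14998b-6337-5d33-8563-647c08b40df2",
}
for cmd in lvm_commands("/dev/sdb", entry):
    print(cmd)
```

This only prints command strings; actually creating VGs/LVs requires root and real block devices, which is exactly what the play does on the testbed nodes.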
2026-04-05 02:57:14.152804 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 02:57:14.152911 | orchestrator | 2.16.14
2026-04-05 02:57:14.152924 | orchestrator |
2026-04-05 02:57:14.152933 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-05 02:57:14.152941 | orchestrator |
2026-04-05 02:57:14.152949 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 02:57:14.152961 | orchestrator | Sunday 05 April 2026 02:57:06 +0000 (0:00:00.347) 0:00:00.347 **********
2026-04-05 02:57:14.152978 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 02:57:14.152995 | orchestrator |
2026-04-05 02:57:14.153008 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-05 02:57:14.153020 | orchestrator | Sunday 05 April 2026 02:57:06 +0000 (0:00:00.277) 0:00:00.625 **********
2026-04-05 02:57:14.153031 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:57:14.153043 | orchestrator |
2026-04-05 02:57:14.153056 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153068 | orchestrator | Sunday 05 April 2026 02:57:06 +0000 (0:00:00.248) 0:00:00.873 **********
2026-04-05 02:57:14.153096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-05 02:57:14.153112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-05 02:57:14.153135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-05 02:57:14.153143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-05 02:57:14.153150 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-05 02:57:14.153157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-05 02:57:14.153165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-05 02:57:14.153172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-05 02:57:14.153179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-05 02:57:14.153187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-05 02:57:14.153212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-05 02:57:14.153220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-05 02:57:14.153227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-05 02:57:14.153234 | orchestrator |
2026-04-05 02:57:14.153241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153248 | orchestrator | Sunday 05 April 2026 02:57:07 +0000 (0:00:00.567) 0:00:01.440 **********
2026-04-05 02:57:14.153256 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153263 | orchestrator |
2026-04-05 02:57:14.153270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153278 | orchestrator | Sunday 05 April 2026 02:57:07 +0000 (0:00:00.235) 0:00:01.676 **********
2026-04-05 02:57:14.153285 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153292 | orchestrator |
2026-04-05 02:57:14.153299 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153306 | orchestrator | Sunday 05 April 2026 02:57:07 +0000 (0:00:00.225) 0:00:01.902 **********
2026-04-05 02:57:14.153313 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153321 | orchestrator |
2026-04-05 02:57:14.153328 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153337 | orchestrator | Sunday 05 April 2026 02:57:07 +0000 (0:00:00.210) 0:00:02.112 **********
2026-04-05 02:57:14.153346 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153354 | orchestrator |
2026-04-05 02:57:14.153363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153371 | orchestrator | Sunday 05 April 2026 02:57:08 +0000 (0:00:00.214) 0:00:02.327 **********
2026-04-05 02:57:14.153380 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153388 | orchestrator |
2026-04-05 02:57:14.153396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153405 | orchestrator | Sunday 05 April 2026 02:57:08 +0000 (0:00:00.221) 0:00:02.548 **********
2026-04-05 02:57:14.153414 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153422 | orchestrator |
2026-04-05 02:57:14.153430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153438 | orchestrator | Sunday 05 April 2026 02:57:08 +0000 (0:00:00.210) 0:00:02.758 **********
2026-04-05 02:57:14.153446 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153477 | orchestrator |
2026-04-05 02:57:14.153486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153495 | orchestrator | Sunday 05 April 2026 02:57:08 +0000 (0:00:00.343) 0:00:03.102 **********
2026-04-05 02:57:14.153504 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153511 | orchestrator |
2026-04-05 02:57:14.153518 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153525 | orchestrator | Sunday 05 April 2026 02:57:09 +0000 (0:00:00.215) 0:00:03.317 **********
2026-04-05 02:57:14.153532 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007)
2026-04-05 02:57:14.153541 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007)
2026-04-05 02:57:14.153548 | orchestrator |
2026-04-05 02:57:14.153556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153580 | orchestrator | Sunday 05 April 2026 02:57:09 +0000 (0:00:00.459) 0:00:03.777 **********
2026-04-05 02:57:14.153587 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51)
2026-04-05 02:57:14.153595 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51)
2026-04-05 02:57:14.153602 | orchestrator |
2026-04-05 02:57:14.153609 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153630 | orchestrator | Sunday 05 April 2026 02:57:10 +0000 (0:00:00.672) 0:00:04.449 **********
2026-04-05 02:57:14.153649 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6)
2026-04-05 02:57:14.153662 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6)
2026-04-05 02:57:14.153674 | orchestrator |
2026-04-05 02:57:14.153685 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153697 | orchestrator | Sunday 05 April 2026 02:57:10 +0000 (0:00:00.719) 0:00:05.169 **********
2026-04-05 02:57:14.153709 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22)
2026-04-05 02:57:14.153728 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22)
2026-04-05 02:57:14.153739 | orchestrator |
2026-04-05 02:57:14.153752 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:14.153765 | orchestrator | Sunday 05 April 2026 02:57:11 +0000 (0:00:00.961) 0:00:06.131 **********
2026-04-05 02:57:14.153777 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 02:57:14.153788 | orchestrator |
2026-04-05 02:57:14.153799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.153810 | orchestrator | Sunday 05 April 2026 02:57:12 +0000 (0:00:00.358) 0:00:06.490 **********
2026-04-05 02:57:14.153822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-05 02:57:14.153834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-05 02:57:14.153846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-05 02:57:14.153857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-05 02:57:14.153869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-05 02:57:14.153881 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-05 02:57:14.153894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-05 02:57:14.153904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-05 02:57:14.153911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-05 02:57:14.153918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-05 02:57:14.153925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-05 02:57:14.153932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-05 02:57:14.153939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-05 02:57:14.153946 | orchestrator |
2026-04-05 02:57:14.153953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.153960 | orchestrator | Sunday 05 April 2026 02:57:12 +0000 (0:00:00.440) 0:00:06.930 **********
2026-04-05 02:57:14.153967 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.153975 | orchestrator |
2026-04-05 02:57:14.153982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.153989 | orchestrator | Sunday 05 April 2026 02:57:12 +0000 (0:00:00.204) 0:00:07.134 **********
2026-04-05 02:57:14.153996 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154003 | orchestrator |
2026-04-05 02:57:14.154010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.154067 | orchestrator | Sunday 05 April 2026 02:57:13 +0000 (0:00:00.236) 0:00:07.371 **********
2026-04-05 02:57:14.154077 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154092 | orchestrator |
2026-04-05 02:57:14.154099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.154107 | orchestrator | Sunday 05 April 2026 02:57:13 +0000 (0:00:00.212) 0:00:07.583 **********
2026-04-05 02:57:14.154114 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154121 | orchestrator |
2026-04-05 02:57:14.154128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.154135 | orchestrator | Sunday 05 April 2026 02:57:13 +0000 (0:00:00.212) 0:00:07.796 **********
2026-04-05 02:57:14.154142 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154149 | orchestrator |
2026-04-05 02:57:14.154156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.154164 | orchestrator | Sunday 05 April 2026 02:57:13 +0000 (0:00:00.223) 0:00:08.020 **********
2026-04-05 02:57:14.154171 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154178 | orchestrator |
2026-04-05 02:57:14.154185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:14.154192 | orchestrator | Sunday 05 April 2026 02:57:13 +0000 (0:00:00.214) 0:00:08.234 **********
2026-04-05 02:57:14.154199 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:14.154207 | orchestrator |
2026-04-05 02:57:14.154223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691436 | orchestrator | Sunday 05 April 2026 02:57:14 +0000 (0:00:00.213) 0:00:08.448 **********
2026-04-05 02:57:22.691598 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691616 | orchestrator |
2026-04-05 02:57:22.691630 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691642 | orchestrator | Sunday 05 April 2026 02:57:14 +0000 (0:00:00.660) 0:00:09.108 **********
2026-04-05 02:57:22.691654 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-05 02:57:22.691666 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-05 02:57:22.691677 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-05 02:57:22.691688 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-05 02:57:22.691699 | orchestrator |
2026-04-05 02:57:22.691710 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691721 | orchestrator | Sunday 05 April 2026 02:57:15 +0000 (0:00:00.725) 0:00:09.833 **********
2026-04-05 02:57:22.691732 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691743 | orchestrator |
2026-04-05 02:57:22.691754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691765 | orchestrator | Sunday 05 April 2026 02:57:15 +0000 (0:00:00.215) 0:00:10.049 **********
2026-04-05 02:57:22.691776 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691786 | orchestrator |
2026-04-05 02:57:22.691814 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691826 | orchestrator | Sunday 05 April 2026 02:57:15 +0000 (0:00:00.219) 0:00:10.269 **********
2026-04-05 02:57:22.691836 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691847 | orchestrator |
2026-04-05 02:57:22.691858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:22.691869 | orchestrator | Sunday 05 April 2026 02:57:16 +0000 (0:00:00.239) 0:00:10.509 **********
2026-04-05 02:57:22.691879 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691890 | orchestrator |
2026-04-05 02:57:22.691901 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 02:57:22.691911 | orchestrator | Sunday 05 April 2026 02:57:16 +0000 (0:00:00.231) 0:00:10.740 **********
2026-04-05 02:57:22.691922 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.691933 | orchestrator |
2026-04-05 02:57:22.691946 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 02:57:22.691959 | orchestrator | Sunday 05 April 2026 02:57:16 +0000 (0:00:00.150) 0:00:10.891 **********
2026-04-05 02:57:22.691972 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2b14998b-6337-5d33-8563-647c08b40df2'}})
2026-04-05 02:57:22.692010 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4671660f-3880-5125-9575-24d25698498a'}})
2026-04-05 02:57:22.692023 | orchestrator |
2026-04-05 02:57:22.692036 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 02:57:22.692049 | orchestrator | Sunday 05 April 2026 02:57:16 +0000 (0:00:00.200) 0:00:11.091 **********
2026-04-05 02:57:22.692063 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692077 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692090 | orchestrator |
2026-04-05 02:57:22.692103 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 02:57:22.692116 | orchestrator | Sunday 05 April 2026 02:57:18 +0000 (0:00:02.044) 0:00:13.135 **********
2026-04-05 02:57:22.692128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692142 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692154 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692166 | orchestrator |
2026-04-05 02:57:22.692178 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 02:57:22.692192 | orchestrator | Sunday 05 April 2026 02:57:18 +0000 (0:00:00.161) 0:00:13.297 **********
2026-04-05 02:57:22.692205 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692218 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692229 | orchestrator |
2026-04-05 02:57:22.692240 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 02:57:22.692250 | orchestrator | Sunday 05 April 2026 02:57:20 +0000 (0:00:01.470) 0:00:14.768 **********
2026-04-05 02:57:22.692261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692283 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692293 | orchestrator |
2026-04-05 02:57:22.692304 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 02:57:22.692315 | orchestrator | Sunday 05 April 2026 02:57:20 +0000 (0:00:00.171) 0:00:14.939 **********
2026-04-05 02:57:22.692342 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692353 | orchestrator |
2026-04-05 02:57:22.692364 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 02:57:22.692375 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.393) 0:00:15.333 **********
2026-04-05 02:57:22.692385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692407 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692418 | orchestrator |
2026-04-05 02:57:22.692428 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 02:57:22.692439 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.172) 0:00:15.505 **********
2026-04-05 02:57:22.692489 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692510 | orchestrator |
2026-04-05 02:57:22.692531 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 02:57:22.692549 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.142) 0:00:15.647 **********
2026-04-05 02:57:22.692571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692594 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692605 | orchestrator |
2026-04-05 02:57:22.692615 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 02:57:22.692626 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.163) 0:00:15.811 **********
2026-04-05 02:57:22.692637 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692647 | orchestrator |
2026-04-05 02:57:22.692658 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 02:57:22.692668 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.150) 0:00:15.962 **********
2026-04-05 02:57:22.692679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692690 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692700 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692711 | orchestrator |
2026-04-05 02:57:22.692722 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 02:57:22.692732 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.171) 0:00:16.133 **********
2026-04-05 02:57:22.692743 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:57:22.692754 | orchestrator |
2026-04-05 02:57:22.692765 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 02:57:22.692776 | orchestrator | Sunday 05 April 2026 02:57:21 +0000 (0:00:00.169) 0:00:16.302 **********
2026-04-05 02:57:22.692786 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692808 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692819 | orchestrator |
2026-04-05 02:57:22.692829 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 02:57:22.692840 | orchestrator | Sunday 05 April 2026 02:57:22 +0000 (0:00:00.178) 0:00:16.480 **********
2026-04-05 02:57:22.692850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692872 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692882 | orchestrator |
2026-04-05 02:57:22.692893 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 02:57:22.692904 | orchestrator | Sunday 05 April 2026 02:57:22 +0000 (0:00:00.159) 0:00:16.640 **********
2026-04-05 02:57:22.692914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 02:57:22.692925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 02:57:22.692943 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692954 | orchestrator |
2026-04-05 02:57:22.692964 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 02:57:22.692975 | orchestrator | Sunday 05 April 2026 02:57:22 +0000 (0:00:00.176) 0:00:16.817 **********
2026-04-05 02:57:22.692985 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:22.692996 | orchestrator |
2026-04-05 02:57:22.693007 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 02:57:22.693025 | orchestrator | Sunday 05 April 2026 02:57:22 +0000 (0:00:00.174) 0:00:16.992 **********
2026-04-05 02:57:29.764593 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:57:29.764698 | orchestrator |
2026-04-05 02:57:29.764716 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a
DB+WAL VG] ***************** 2026-04-05 02:57:29.764730 | orchestrator | Sunday 05 April 2026 02:57:22 +0000 (0:00:00.144) 0:00:17.137 ********** 2026-04-05 02:57:29.764741 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.764753 | orchestrator | 2026-04-05 02:57:29.764764 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-05 02:57:29.764776 | orchestrator | Sunday 05 April 2026 02:57:23 +0000 (0:00:00.395) 0:00:17.532 ********** 2026-04-05 02:57:29.764786 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 02:57:29.764798 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-05 02:57:29.764809 | orchestrator | } 2026-04-05 02:57:29.764820 | orchestrator | 2026-04-05 02:57:29.764831 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-05 02:57:29.764842 | orchestrator | Sunday 05 April 2026 02:57:23 +0000 (0:00:00.150) 0:00:17.683 ********** 2026-04-05 02:57:29.764853 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 02:57:29.764863 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-05 02:57:29.764874 | orchestrator | } 2026-04-05 02:57:29.764885 | orchestrator | 2026-04-05 02:57:29.764896 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-05 02:57:29.764921 | orchestrator | Sunday 05 April 2026 02:57:23 +0000 (0:00:00.147) 0:00:17.830 ********** 2026-04-05 02:57:29.764933 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 02:57:29.764944 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-05 02:57:29.764955 | orchestrator | } 2026-04-05 02:57:29.764966 | orchestrator | 2026-04-05 02:57:29.764977 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-05 02:57:29.764988 | orchestrator | Sunday 05 April 2026 02:57:23 +0000 (0:00:00.158) 0:00:17.989 ********** 2026-04-05 02:57:29.764999 | orchestrator | ok: 
[testbed-node-3] 2026-04-05 02:57:29.765039 | orchestrator | 2026-04-05 02:57:29.765050 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-05 02:57:29.765062 | orchestrator | Sunday 05 April 2026 02:57:24 +0000 (0:00:00.701) 0:00:18.690 ********** 2026-04-05 02:57:29.765075 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:29.765087 | orchestrator | 2026-04-05 02:57:29.765100 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-05 02:57:29.765113 | orchestrator | Sunday 05 April 2026 02:57:24 +0000 (0:00:00.540) 0:00:19.230 ********** 2026-04-05 02:57:29.765125 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:29.765138 | orchestrator | 2026-04-05 02:57:29.765151 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-05 02:57:29.765164 | orchestrator | Sunday 05 April 2026 02:57:25 +0000 (0:00:00.571) 0:00:19.802 ********** 2026-04-05 02:57:29.765177 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:29.765190 | orchestrator | 2026-04-05 02:57:29.765203 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-05 02:57:29.765216 | orchestrator | Sunday 05 April 2026 02:57:25 +0000 (0:00:00.164) 0:00:19.967 ********** 2026-04-05 02:57:29.765227 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765238 | orchestrator | 2026-04-05 02:57:29.765249 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-05 02:57:29.765284 | orchestrator | Sunday 05 April 2026 02:57:25 +0000 (0:00:00.124) 0:00:20.092 ********** 2026-04-05 02:57:29.765296 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765306 | orchestrator | 2026-04-05 02:57:29.765317 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-05 02:57:29.765328 | orchestrator | 
Sunday 05 April 2026 02:57:25 +0000 (0:00:00.129) 0:00:20.222 ********** 2026-04-05 02:57:29.765339 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 02:57:29.765350 | orchestrator |  "vgs_report": { 2026-04-05 02:57:29.765362 | orchestrator |  "vg": [] 2026-04-05 02:57:29.765373 | orchestrator |  } 2026-04-05 02:57:29.765384 | orchestrator | } 2026-04-05 02:57:29.765395 | orchestrator | 2026-04-05 02:57:29.765406 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-05 02:57:29.765417 | orchestrator | Sunday 05 April 2026 02:57:26 +0000 (0:00:00.152) 0:00:20.375 ********** 2026-04-05 02:57:29.765428 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765439 | orchestrator | 2026-04-05 02:57:29.765469 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-05 02:57:29.765481 | orchestrator | Sunday 05 April 2026 02:57:26 +0000 (0:00:00.139) 0:00:20.515 ********** 2026-04-05 02:57:29.765492 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765503 | orchestrator | 2026-04-05 02:57:29.765514 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-05 02:57:29.765524 | orchestrator | Sunday 05 April 2026 02:57:26 +0000 (0:00:00.386) 0:00:20.901 ********** 2026-04-05 02:57:29.765535 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765545 | orchestrator | 2026-04-05 02:57:29.765556 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-05 02:57:29.765567 | orchestrator | Sunday 05 April 2026 02:57:26 +0000 (0:00:00.190) 0:00:21.092 ********** 2026-04-05 02:57:29.765577 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765588 | orchestrator | 2026-04-05 02:57:29.765599 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-05 02:57:29.765610 | orchestrator | Sunday 
05 April 2026 02:57:26 +0000 (0:00:00.183) 0:00:21.275 ********** 2026-04-05 02:57:29.765620 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765631 | orchestrator | 2026-04-05 02:57:29.765641 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-05 02:57:29.765652 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.155) 0:00:21.431 ********** 2026-04-05 02:57:29.765663 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765673 | orchestrator | 2026-04-05 02:57:29.765684 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-05 02:57:29.765695 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.152) 0:00:21.583 ********** 2026-04-05 02:57:29.765705 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765716 | orchestrator | 2026-04-05 02:57:29.765727 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-05 02:57:29.765738 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.160) 0:00:21.744 ********** 2026-04-05 02:57:29.765766 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765778 | orchestrator | 2026-04-05 02:57:29.765789 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-05 02:57:29.765800 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.155) 0:00:21.900 ********** 2026-04-05 02:57:29.765810 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765822 | orchestrator | 2026-04-05 02:57:29.765833 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-05 02:57:29.765844 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.149) 0:00:22.050 ********** 2026-04-05 02:57:29.765854 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765865 | orchestrator | 2026-04-05 02:57:29.765876 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-05 02:57:29.765887 | orchestrator | Sunday 05 April 2026 02:57:27 +0000 (0:00:00.153) 0:00:22.203 ********** 2026-04-05 02:57:29.765906 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765917 | orchestrator | 2026-04-05 02:57:29.765928 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-05 02:57:29.765939 | orchestrator | Sunday 05 April 2026 02:57:28 +0000 (0:00:00.160) 0:00:22.363 ********** 2026-04-05 02:57:29.765950 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.765961 | orchestrator | 2026-04-05 02:57:29.765985 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-05 02:57:29.765997 | orchestrator | Sunday 05 April 2026 02:57:28 +0000 (0:00:00.131) 0:00:22.495 ********** 2026-04-05 02:57:29.766008 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766076 | orchestrator | 2026-04-05 02:57:29.766090 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-05 02:57:29.766101 | orchestrator | Sunday 05 April 2026 02:57:28 +0000 (0:00:00.147) 0:00:22.642 ********** 2026-04-05 02:57:29.766112 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766123 | orchestrator | 2026-04-05 02:57:29.766134 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-05 02:57:29.766145 | orchestrator | Sunday 05 April 2026 02:57:28 +0000 (0:00:00.394) 0:00:23.037 ********** 2026-04-05 02:57:29.766157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:29.766170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 
'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:29.766181 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766192 | orchestrator | 2026-04-05 02:57:29.766202 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-05 02:57:29.766213 | orchestrator | Sunday 05 April 2026 02:57:28 +0000 (0:00:00.168) 0:00:23.206 ********** 2026-04-05 02:57:29.766224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:29.766235 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:29.766246 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766257 | orchestrator | 2026-04-05 02:57:29.766268 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-05 02:57:29.766279 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.163) 0:00:23.369 ********** 2026-04-05 02:57:29.766290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:29.766301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:29.766312 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766322 | orchestrator | 2026-04-05 02:57:29.766333 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-05 02:57:29.766344 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.172) 0:00:23.541 ********** 2026-04-05 02:57:29.766355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:29.766365 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:29.766376 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766387 | orchestrator | 2026-04-05 02:57:29.766398 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-05 02:57:29.766409 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.191) 0:00:23.732 ********** 2026-04-05 02:57:29.766427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:29.766438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:29.766467 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:29.766478 | orchestrator | 2026-04-05 02:57:29.766489 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-05 02:57:29.766501 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.170) 0:00:23.903 ********** 2026-04-05 02:57:29.766521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.607064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.607173 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.607192 | orchestrator | 2026-04-05 02:57:35.607205 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-05 02:57:35.607239 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.162) 0:00:24.066 ********** 2026-04-05 02:57:35.607250 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.607273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.607284 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.607295 | orchestrator | 2026-04-05 02:57:35.607322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-05 02:57:35.607334 | orchestrator | Sunday 05 April 2026 02:57:29 +0000 (0:00:00.182) 0:00:24.248 ********** 2026-04-05 02:57:35.607345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.607356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.607367 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.607378 | orchestrator | 2026-04-05 02:57:35.607389 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-05 02:57:35.607399 | orchestrator | Sunday 05 April 2026 02:57:30 +0000 (0:00:00.170) 0:00:24.418 ********** 2026-04-05 02:57:35.607410 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:35.607422 | orchestrator | 2026-04-05 02:57:35.607433 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-05 02:57:35.607502 | orchestrator | Sunday 05 April 2026 02:57:30 +0000 
(0:00:00.592) 0:00:25.011 ********** 2026-04-05 02:57:35.607526 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:35.607544 | orchestrator | 2026-04-05 02:57:35.607562 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-05 02:57:35.607579 | orchestrator | Sunday 05 April 2026 02:57:31 +0000 (0:00:00.577) 0:00:25.588 ********** 2026-04-05 02:57:35.607597 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:57:35.607616 | orchestrator | 2026-04-05 02:57:35.607634 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-05 02:57:35.607654 | orchestrator | Sunday 05 April 2026 02:57:31 +0000 (0:00:00.160) 0:00:25.749 ********** 2026-04-05 02:57:35.607674 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'vg_name': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}) 2026-04-05 02:57:35.607694 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'vg_name': 'ceph-4671660f-3880-5125-9575-24d25698498a'}) 2026-04-05 02:57:35.607741 | orchestrator | 2026-04-05 02:57:35.607761 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-05 02:57:35.607780 | orchestrator | Sunday 05 April 2026 02:57:31 +0000 (0:00:00.179) 0:00:25.929 ********** 2026-04-05 02:57:35.607801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.607820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.607839 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.607860 | orchestrator | 2026-04-05 02:57:35.607877 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-05 02:57:35.607897 | orchestrator | Sunday 05 April 2026 02:57:32 +0000 (0:00:00.403) 0:00:26.333 ********** 2026-04-05 02:57:35.607918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.607937 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.607955 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.607966 | orchestrator | 2026-04-05 02:57:35.607977 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-05 02:57:35.607988 | orchestrator | Sunday 05 April 2026 02:57:32 +0000 (0:00:00.183) 0:00:26.516 ********** 2026-04-05 02:57:35.607998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 02:57:35.608009 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 02:57:35.608020 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:57:35.608030 | orchestrator | 2026-04-05 02:57:35.608041 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 02:57:35.608052 | orchestrator | Sunday 05 April 2026 02:57:32 +0000 (0:00:00.171) 0:00:26.688 ********** 2026-04-05 02:57:35.608082 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 02:57:35.608094 | orchestrator |  "lvm_report": { 2026-04-05 02:57:35.608106 | orchestrator |  "lv": [ 2026-04-05 02:57:35.608117 | orchestrator |  { 2026-04-05 02:57:35.608129 | orchestrator |  "lv_name": 
"osd-block-2b14998b-6337-5d33-8563-647c08b40df2", 2026-04-05 02:57:35.608142 | orchestrator |  "vg_name": "ceph-2b14998b-6337-5d33-8563-647c08b40df2" 2026-04-05 02:57:35.608160 | orchestrator |  }, 2026-04-05 02:57:35.608190 | orchestrator |  { 2026-04-05 02:57:35.608209 | orchestrator |  "lv_name": "osd-block-4671660f-3880-5125-9575-24d25698498a", 2026-04-05 02:57:35.608227 | orchestrator |  "vg_name": "ceph-4671660f-3880-5125-9575-24d25698498a" 2026-04-05 02:57:35.608244 | orchestrator |  } 2026-04-05 02:57:35.608261 | orchestrator |  ], 2026-04-05 02:57:35.608279 | orchestrator |  "pv": [ 2026-04-05 02:57:35.608296 | orchestrator |  { 2026-04-05 02:57:35.608313 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 02:57:35.608331 | orchestrator |  "vg_name": "ceph-2b14998b-6337-5d33-8563-647c08b40df2" 2026-04-05 02:57:35.608351 | orchestrator |  }, 2026-04-05 02:57:35.608368 | orchestrator |  { 2026-04-05 02:57:35.608397 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 02:57:35.608418 | orchestrator |  "vg_name": "ceph-4671660f-3880-5125-9575-24d25698498a" 2026-04-05 02:57:35.608435 | orchestrator |  } 2026-04-05 02:57:35.608519 | orchestrator |  ] 2026-04-05 02:57:35.608542 | orchestrator |  } 2026-04-05 02:57:35.608561 | orchestrator | } 2026-04-05 02:57:35.608586 | orchestrator | 2026-04-05 02:57:35.608598 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 02:57:35.608616 | orchestrator | 2026-04-05 02:57:35.608632 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 02:57:35.608643 | orchestrator | Sunday 05 April 2026 02:57:32 +0000 (0:00:00.305) 0:00:26.993 ********** 2026-04-05 02:57:35.608654 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-05 02:57:35.608666 | orchestrator | 2026-04-05 02:57:35.608677 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 
02:57:35.608687 | orchestrator | Sunday 05 April 2026 02:57:32 +0000 (0:00:00.307) 0:00:27.301 ********** 2026-04-05 02:57:35.608698 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:57:35.608709 | orchestrator | 2026-04-05 02:57:35.608720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.608730 | orchestrator | Sunday 05 April 2026 02:57:33 +0000 (0:00:00.259) 0:00:27.560 ********** 2026-04-05 02:57:35.608741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-05 02:57:35.608752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-05 02:57:35.608762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-05 02:57:35.608773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-05 02:57:35.608784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-05 02:57:35.608795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-05 02:57:35.608805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-05 02:57:35.608816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-05 02:57:35.608826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-05 02:57:35.608837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-05 02:57:35.608847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-05 02:57:35.608858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-05 02:57:35.608868 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-05 02:57:35.608879 | orchestrator | 2026-04-05 02:57:35.608890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.608904 | orchestrator | Sunday 05 April 2026 02:57:33 +0000 (0:00:00.476) 0:00:28.036 ********** 2026-04-05 02:57:35.608927 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.608954 | orchestrator | 2026-04-05 02:57:35.608972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.608989 | orchestrator | Sunday 05 April 2026 02:57:33 +0000 (0:00:00.236) 0:00:28.273 ********** 2026-04-05 02:57:35.609008 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.609027 | orchestrator | 2026-04-05 02:57:35.609044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.609064 | orchestrator | Sunday 05 April 2026 02:57:34 +0000 (0:00:00.712) 0:00:28.986 ********** 2026-04-05 02:57:35.609083 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.609101 | orchestrator | 2026-04-05 02:57:35.609120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.609137 | orchestrator | Sunday 05 April 2026 02:57:34 +0000 (0:00:00.249) 0:00:29.236 ********** 2026-04-05 02:57:35.609155 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.609172 | orchestrator | 2026-04-05 02:57:35.609191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:35.609208 | orchestrator | Sunday 05 April 2026 02:57:35 +0000 (0:00:00.228) 0:00:29.465 ********** 2026-04-05 02:57:35.609239 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.609255 | orchestrator | 2026-04-05 02:57:35.609271 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-05 02:57:35.609287 | orchestrator | Sunday 05 April 2026 02:57:35 +0000 (0:00:00.214) 0:00:29.679 ********** 2026-04-05 02:57:35.609303 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:35.609318 | orchestrator | 2026-04-05 02:57:35.609352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:47.751433 | orchestrator | Sunday 05 April 2026 02:57:35 +0000 (0:00:00.227) 0:00:29.906 ********** 2026-04-05 02:57:47.751656 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:47.751676 | orchestrator | 2026-04-05 02:57:47.751689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:47.751701 | orchestrator | Sunday 05 April 2026 02:57:35 +0000 (0:00:00.215) 0:00:30.121 ********** 2026-04-05 02:57:47.751712 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:57:47.751723 | orchestrator | 2026-04-05 02:57:47.751734 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:47.751745 | orchestrator | Sunday 05 April 2026 02:57:36 +0000 (0:00:00.211) 0:00:30.333 ********** 2026-04-05 02:57:47.751756 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a) 2026-04-05 02:57:47.751768 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a) 2026-04-05 02:57:47.751779 | orchestrator | 2026-04-05 02:57:47.751807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:57:47.751819 | orchestrator | Sunday 05 April 2026 02:57:36 +0000 (0:00:00.438) 0:00:30.771 ********** 2026-04-05 02:57:47.751830 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55) 2026-04-05 02:57:47.751841 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55)
2026-04-05 02:57:47.751852 | orchestrator |
2026-04-05 02:57:47.751863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:47.751874 | orchestrator | Sunday 05 April 2026 02:57:36 +0000 (0:00:00.467) 0:00:31.239 **********
2026-04-05 02:57:47.751884 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c)
2026-04-05 02:57:47.751895 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c)
2026-04-05 02:57:47.751906 | orchestrator |
2026-04-05 02:57:47.751917 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:47.751928 | orchestrator | Sunday 05 April 2026 02:57:37 +0000 (0:00:00.480) 0:00:31.719 **********
2026-04-05 02:57:47.751938 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c)
2026-04-05 02:57:47.751949 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c)
2026-04-05 02:57:47.751960 | orchestrator |
2026-04-05 02:57:47.751974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-05 02:57:47.751986 | orchestrator | Sunday 05 April 2026 02:57:38 +0000 (0:00:00.743) 0:00:32.463 **********
2026-04-05 02:57:47.751999 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-05 02:57:47.752011 | orchestrator |
2026-04-05 02:57:47.752023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752037 | orchestrator | Sunday 05 April 2026 02:57:38 +0000 (0:00:00.617) 0:00:33.080 **********
2026-04-05 02:57:47.752049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-05 02:57:47.752063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-05 02:57:47.752076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-05 02:57:47.752111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-05 02:57:47.752123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-05 02:57:47.752134 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-05 02:57:47.752144 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-05 02:57:47.752155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-05 02:57:47.752165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-05 02:57:47.752176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-05 02:57:47.752186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-05 02:57:47.752197 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-05 02:57:47.752207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-05 02:57:47.752218 | orchestrator |
2026-04-05 02:57:47.752229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752239 | orchestrator | Sunday 05 April 2026 02:57:39 +0000 (0:00:00.967) 0:00:34.048 **********
2026-04-05 02:57:47.752250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752261 | orchestrator |
2026-04-05 02:57:47.752271 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752282 | orchestrator | Sunday 05 April 2026 02:57:39 +0000 (0:00:00.234) 0:00:34.283 **********
2026-04-05 02:57:47.752293 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752303 | orchestrator |
2026-04-05 02:57:47.752314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752325 | orchestrator | Sunday 05 April 2026 02:57:40 +0000 (0:00:00.225) 0:00:34.508 **********
2026-04-05 02:57:47.752336 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752346 | orchestrator |
2026-04-05 02:57:47.752376 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752388 | orchestrator | Sunday 05 April 2026 02:57:40 +0000 (0:00:00.215) 0:00:34.723 **********
2026-04-05 02:57:47.752398 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752409 | orchestrator |
2026-04-05 02:57:47.752420 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752431 | orchestrator | Sunday 05 April 2026 02:57:40 +0000 (0:00:00.224) 0:00:34.947 **********
2026-04-05 02:57:47.752442 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752487 | orchestrator |
2026-04-05 02:57:47.752499 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752510 | orchestrator | Sunday 05 April 2026 02:57:40 +0000 (0:00:00.224) 0:00:35.172 **********
2026-04-05 02:57:47.752521 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752532 | orchestrator |
2026-04-05 02:57:47.752543 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752554 | orchestrator | Sunday 05 April 2026 02:57:41 +0000 (0:00:00.210) 0:00:35.382 **********
2026-04-05 02:57:47.752571 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752582 | orchestrator |
2026-04-05 02:57:47.752593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752604 | orchestrator | Sunday 05 April 2026 02:57:41 +0000 (0:00:00.213) 0:00:35.595 **********
2026-04-05 02:57:47.752614 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752625 | orchestrator |
2026-04-05 02:57:47.752636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752647 | orchestrator | Sunday 05 April 2026 02:57:41 +0000 (0:00:00.224) 0:00:35.820 **********
2026-04-05 02:57:47.752657 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-05 02:57:47.752677 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-05 02:57:47.752688 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-05 02:57:47.752699 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-05 02:57:47.752710 | orchestrator |
2026-04-05 02:57:47.752721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752732 | orchestrator | Sunday 05 April 2026 02:57:42 +0000 (0:00:00.958) 0:00:36.779 **********
2026-04-05 02:57:47.752743 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752754 | orchestrator |
2026-04-05 02:57:47.752764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752775 | orchestrator | Sunday 05 April 2026 02:57:43 +0000 (0:00:00.714) 0:00:37.493 **********
2026-04-05 02:57:47.752786 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752796 | orchestrator |
2026-04-05 02:57:47.752807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752818 | orchestrator | Sunday 05 April 2026 02:57:43 +0000 (0:00:00.197) 0:00:37.691 **********
2026-04-05 02:57:47.752829 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752840 | orchestrator |
2026-04-05 02:57:47.752851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-05 02:57:47.752861 | orchestrator | Sunday 05 April 2026 02:57:43 +0000 (0:00:00.234) 0:00:37.925 **********
2026-04-05 02:57:47.752872 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752883 | orchestrator |
2026-04-05 02:57:47.752894 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-05 02:57:47.752904 | orchestrator | Sunday 05 April 2026 02:57:43 +0000 (0:00:00.222) 0:00:38.147 **********
2026-04-05 02:57:47.752915 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.752926 | orchestrator |
2026-04-05 02:57:47.752936 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-05 02:57:47.752947 | orchestrator | Sunday 05 April 2026 02:57:44 +0000 (0:00:00.168) 0:00:38.316 **********
2026-04-05 02:57:47.752958 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71b5f103-fb0e-5af6-8506-51783512c8b9'}})
2026-04-05 02:57:47.752969 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8259097b-349e-523a-9f4d-33b374f7dc5d'}})
2026-04-05 02:57:47.752980 | orchestrator |
2026-04-05 02:57:47.753041 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-05 02:57:47.753053 | orchestrator | Sunday 05 April 2026 02:57:44 +0000 (0:00:00.228) 0:00:38.545 **********
2026-04-05 02:57:47.753065 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:47.753077 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:47.753088 | orchestrator |
2026-04-05 02:57:47.753099 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-05 02:57:47.753110 | orchestrator | Sunday 05 April 2026 02:57:46 +0000 (0:00:01.909) 0:00:40.454 **********
2026-04-05 02:57:47.753121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:47.753133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:47.753143 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:47.753154 | orchestrator |
2026-04-05 02:57:47.753165 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-05 02:57:47.753176 | orchestrator | Sunday 05 April 2026 02:57:46 +0000 (0:00:00.162) 0:00:40.617 **********
2026-04-05 02:57:47.753187 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:47.753215 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843537 | orchestrator |
2026-04-05 02:57:53.843624 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-05 02:57:53.843634 | orchestrator | Sunday 05 April 2026 02:57:47 +0000 (0:00:01.428) 0:00:42.046 **********
2026-04-05 02:57:53.843641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843656 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843663 | orchestrator |
2026-04-05 02:57:53.843681 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-05 02:57:53.843687 | orchestrator | Sunday 05 April 2026 02:57:47 +0000 (0:00:00.183) 0:00:42.229 **********
2026-04-05 02:57:53.843693 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843699 | orchestrator |
2026-04-05 02:57:53.843705 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-05 02:57:53.843711 | orchestrator | Sunday 05 April 2026 02:57:48 +0000 (0:00:00.141) 0:00:42.371 **********
2026-04-05 02:57:53.843716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843722 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843728 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843734 | orchestrator |
2026-04-05 02:57:53.843740 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-05 02:57:53.843745 | orchestrator | Sunday 05 April 2026 02:57:48 +0000 (0:00:00.181) 0:00:42.553 **********
2026-04-05 02:57:53.843751 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843757 | orchestrator |
2026-04-05 02:57:53.843762 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-05 02:57:53.843768 | orchestrator | Sunday 05 April 2026 02:57:48 +0000 (0:00:00.169) 0:00:42.722 **********
2026-04-05 02:57:53.843774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843785 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843792 | orchestrator |
2026-04-05 02:57:53.843798 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-05 02:57:53.843803 | orchestrator | Sunday 05 April 2026 02:57:48 +0000 (0:00:00.433) 0:00:43.156 **********
2026-04-05 02:57:53.843809 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843815 | orchestrator |
2026-04-05 02:57:53.843821 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-05 02:57:53.843826 | orchestrator | Sunday 05 April 2026 02:57:48 +0000 (0:00:00.142) 0:00:43.299 **********
2026-04-05 02:57:53.843832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843844 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843849 | orchestrator |
2026-04-05 02:57:53.843855 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-05 02:57:53.843877 | orchestrator | Sunday 05 April 2026 02:57:49 +0000 (0:00:00.178) 0:00:43.477 **********
2026-04-05 02:57:53.843883 | orchestrator | ok: [testbed-node-4]
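The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above pair each OSD device with an `osd_lvm_uuid` and then operate on items of the form `{'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'}`. A minimal sketch of that naming convention, inferred purely from the log items (the function name `lvm_volumes_from_devices` is illustrative, not from the playbook):

```python
# Sketch: derive lvm_volumes-style entries from a ceph_osd_devices mapping,
# following the "ceph-<uuid>" VG / "osd-block-<uuid>" LV pattern seen in the
# task items above. Values taken from the log; the helper itself is assumed.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "71b5f103-fb0e-5af6-8506-51783512c8b9"},
    "sdc": {"osd_lvm_uuid": "8259097b-349e-523a-9f4d-33b374f7dc5d"},
}

def lvm_volumes_from_devices(devices):
    """Build one {'data': ..., 'data_vg': ...} entry per OSD device."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

vols = lvm_volumes_from_devices(ceph_osd_devices)
```

Each entry then drives one `changed:` item in the "Create block VGs" and "Create block LVs" tasks.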
2026-04-05 02:57:53.843889 | orchestrator |
2026-04-05 02:57:53.843895 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-05 02:57:53.843901 | orchestrator | Sunday 05 April 2026 02:57:49 +0000 (0:00:00.161) 0:00:43.639 **********
2026-04-05 02:57:53.843907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843913 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843918 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843924 | orchestrator |
2026-04-05 02:57:53.843930 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-05 02:57:53.843935 | orchestrator | Sunday 05 April 2026 02:57:49 +0000 (0:00:00.180) 0:00:43.819 **********
2026-04-05 02:57:53.843941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.843952 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.843958 | orchestrator |
2026-04-05 02:57:53.843964 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-05 02:57:53.843982 | orchestrator | Sunday 05 April 2026 02:57:49 +0000 (0:00:00.165) 0:00:43.984 **********
2026-04-05 02:57:53.843988 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:53.843994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:53.844000 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844006 | orchestrator |
2026-04-05 02:57:53.844011 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-05 02:57:53.844017 | orchestrator | Sunday 05 April 2026 02:57:49 +0000 (0:00:00.168) 0:00:44.153 **********
2026-04-05 02:57:53.844026 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844032 | orchestrator |
2026-04-05 02:57:53.844038 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-05 02:57:53.844043 | orchestrator | Sunday 05 April 2026 02:57:50 +0000 (0:00:00.199) 0:00:44.352 **********
2026-04-05 02:57:53.844050 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844057 | orchestrator |
2026-04-05 02:57:53.844063 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 02:57:53.844070 | orchestrator | Sunday 05 April 2026 02:57:50 +0000 (0:00:00.147) 0:00:44.500 **********
2026-04-05 02:57:53.844076 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844083 | orchestrator |
2026-04-05 02:57:53.844090 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 02:57:53.844097 | orchestrator | Sunday 05 April 2026 02:57:50 +0000 (0:00:00.157) 0:00:44.657 **********
2026-04-05 02:57:53.844103 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 02:57:53.844110 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 02:57:53.844117 | orchestrator | }
2026-04-05 02:57:53.844124 | orchestrator |
2026-04-05 02:57:53.844131 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 02:57:53.844138 | orchestrator | Sunday 05 April 2026 02:57:50 +0000 (0:00:00.141) 0:00:44.799 **********
2026-04-05 02:57:53.844145 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 02:57:53.844151 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 02:57:53.844163 | orchestrator | }
2026-04-05 02:57:53.844170 | orchestrator |
2026-04-05 02:57:53.844177 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 02:57:53.844183 | orchestrator | Sunday 05 April 2026 02:57:50 +0000 (0:00:00.142) 0:00:44.941 **********
2026-04-05 02:57:53.844191 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 02:57:53.844197 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 02:57:53.844204 | orchestrator | }
2026-04-05 02:57:53.844211 | orchestrator |
2026-04-05 02:57:53.844218 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 02:57:53.844225 | orchestrator | Sunday 05 April 2026 02:57:51 +0000 (0:00:00.406) 0:00:45.348 **********
2026-04-05 02:57:53.844231 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:53.844238 | orchestrator |
2026-04-05 02:57:53.844245 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 02:57:53.844251 | orchestrator | Sunday 05 April 2026 02:57:51 +0000 (0:00:00.553) 0:00:45.901 **********
2026-04-05 02:57:53.844258 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:53.844264 | orchestrator |
2026-04-05 02:57:53.844271 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 02:57:53.844277 | orchestrator | Sunday 05 April 2026 02:57:52 +0000 (0:00:00.531) 0:00:46.433 **********
2026-04-05 02:57:53.844284 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:53.844290 | orchestrator |
2026-04-05 02:57:53.844297 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 02:57:53.844304 | orchestrator | Sunday 05 April 2026 02:57:52 +0000 (0:00:00.542) 0:00:46.976 **********
2026-04-05 02:57:53.844310 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:53.844317 | orchestrator |
2026-04-05 02:57:53.844324 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 02:57:53.844331 | orchestrator | Sunday 05 April 2026 02:57:52 +0000 (0:00:00.161) 0:00:47.137 **********
2026-04-05 02:57:53.844337 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844344 | orchestrator |
2026-04-05 02:57:53.844351 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 02:57:53.844358 | orchestrator | Sunday 05 April 2026 02:57:52 +0000 (0:00:00.121) 0:00:47.259 **********
2026-04-05 02:57:53.844365 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844371 | orchestrator |
2026-04-05 02:57:53.844378 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 02:57:53.844385 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.138) 0:00:47.397 **********
2026-04-05 02:57:53.844392 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 02:57:53.844399 | orchestrator |     "vgs_report": {
2026-04-05 02:57:53.844406 | orchestrator |         "vg": []
2026-04-05 02:57:53.844413 | orchestrator |     }
2026-04-05 02:57:53.844419 | orchestrator | }
2026-04-05 02:57:53.844424 | orchestrator |
2026-04-05 02:57:53.844430 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 02:57:53.844436 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.155) 0:00:47.553 **********
2026-04-05 02:57:53.844442 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844485 | orchestrator |
2026-04-05 02:57:53.844491 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 02:57:53.844497 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.147) 0:00:47.700 **********
2026-04-05 02:57:53.844503 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844508 | orchestrator |
2026-04-05 02:57:53.844514 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 02:57:53.844520 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.150) 0:00:47.851 **********
2026-04-05 02:57:53.844525 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844531 | orchestrator |
2026-04-05 02:57:53.844537 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 02:57:53.844543 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.141) 0:00:47.992 **********
2026-04-05 02:57:53.844553 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:53.844559 | orchestrator |
2026-04-05 02:57:53.844569 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 02:57:59.029573 | orchestrator | Sunday 05 April 2026 02:57:53 +0000 (0:00:00.149) 0:00:48.142 **********
2026-04-05 02:57:59.029683 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.029706 | orchestrator |
2026-04-05 02:57:59.029725 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 02:57:59.029751 | orchestrator | Sunday 05 April 2026 02:57:54 +0000 (0:00:00.388) 0:00:48.530 **********
2026-04-05 02:57:59.029771 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.029790 | orchestrator |
2026-04-05 02:57:59.029807 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 02:57:59.029825 | orchestrator | Sunday 05 April 2026 02:57:54 +0000 (0:00:00.166) 0:00:48.697 **********
2026-04-05 02:57:59.029842 | orchestrator | skipping: [testbed-node-4]
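The "Gather … VGs with total and available size" tasks followed by "Combine JSON from _db/wal/db_wal_vgs_cmd_output" suggest the play collects several LVM report payloads and merges their `vg` lists (here all empty, hence `vgs_report: {"vg": []}`). A sketch of that merge, assuming the payloads have the `{"report": [{"vg": [...]}]}` shape that `vgs --reportformat json` produces; the function name and sample data are illustrative:

```python
import json

def combine_vg_reports(*raw_outputs):
    """Merge the vg lists from several vgs-style JSON reports into one dict."""
    vgs = []
    for raw in raw_outputs:
        report = json.loads(raw)["report"][0]
        vgs.extend(report["vg"])
    return {"vg": vgs}

# With no DB/WAL devices configured, every gathered report is empty,
# which matches the "vgs_report": {"vg": []} printed above.
empty = '{"report": [{"vg": []}]}'
vgs_report = combine_vg_reports(empty, empty, empty)
```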
2026-04-05 02:57:59.029857 | orchestrator |
2026-04-05 02:57:59.029897 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 02:57:59.029917 | orchestrator | Sunday 05 April 2026 02:57:54 +0000 (0:00:00.149) 0:00:48.846 **********
2026-04-05 02:57:59.029936 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.029955 | orchestrator |
2026-04-05 02:57:59.029973 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 02:57:59.029990 | orchestrator | Sunday 05 April 2026 02:57:54 +0000 (0:00:00.172) 0:00:49.019 **********
2026-04-05 02:57:59.030001 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030012 | orchestrator |
2026-04-05 02:57:59.030083 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 02:57:59.030097 | orchestrator | Sunday 05 April 2026 02:57:54 +0000 (0:00:00.151) 0:00:49.171 **********
2026-04-05 02:57:59.030110 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030123 | orchestrator |
2026-04-05 02:57:59.030136 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 02:57:59.030185 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.151) 0:00:49.322 **********
2026-04-05 02:57:59.030199 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030212 | orchestrator |
2026-04-05 02:57:59.030225 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 02:57:59.030238 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.149) 0:00:49.471 **********
2026-04-05 02:57:59.030251 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030264 | orchestrator |
2026-04-05 02:57:59.030277 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 02:57:59.030290 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.150) 0:00:49.621 **********
2026-04-05 02:57:59.030302 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030315 | orchestrator |
2026-04-05 02:57:59.030328 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 02:57:59.030340 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.152) 0:00:49.773 **********
2026-04-05 02:57:59.030354 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030366 | orchestrator |
2026-04-05 02:57:59.030379 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 02:57:59.030392 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.138) 0:00:49.912 **********
2026-04-05 02:57:59.030410 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030508 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030528 | orchestrator |
2026-04-05 02:57:59.030547 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 02:57:59.030597 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.164) 0:00:50.077 **********
2026-04-05 02:57:59.030611 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030634 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030644 | orchestrator |
2026-04-05 02:57:59.030655 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 02:57:59.030665 | orchestrator | Sunday 05 April 2026 02:57:55 +0000 (0:00:00.160) 0:00:50.238 **********
2026-04-05 02:57:59.030676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030700 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030718 | orchestrator |
2026-04-05 02:57:59.030736 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 02:57:59.030753 | orchestrator | Sunday 05 April 2026 02:57:56 +0000 (0:00:00.398) 0:00:50.636 **********
2026-04-05 02:57:59.030771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030789 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030808 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030827 | orchestrator |
2026-04-05 02:57:59.030871 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 02:57:59.030883 | orchestrator | Sunday 05 April 2026 02:57:56 +0000 (0:00:00.172) 0:00:50.809 **********
2026-04-05 02:57:59.030894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030916 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.030927 | orchestrator |
2026-04-05 02:57:59.030946 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 02:57:59.030958 | orchestrator | Sunday 05 April 2026 02:57:56 +0000 (0:00:00.160) 0:00:50.970 **********
2026-04-05 02:57:59.030968 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.030979 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.030990 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.031001 | orchestrator |
2026-04-05 02:57:59.031011 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 02:57:59.031022 | orchestrator | Sunday 05 April 2026 02:57:56 +0000 (0:00:00.177) 0:00:51.147 **********
2026-04-05 02:57:59.031033 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.031044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.031055 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.031076 | orchestrator |
2026-04-05 02:57:59.031087 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 02:57:59.031098 | orchestrator | Sunday 05 April 2026 02:57:57 +0000 (0:00:00.175) 0:00:51.323 **********
2026-04-05 02:57:59.031108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.031119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.031130 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.031141 | orchestrator |
2026-04-05 02:57:59.031151 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 02:57:59.031162 | orchestrator | Sunday 05 April 2026 02:57:57 +0000 (0:00:00.167) 0:00:51.490 **********
2026-04-05 02:57:59.031173 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:59.031184 | orchestrator |
2026-04-05 02:57:59.031194 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 02:57:59.031205 | orchestrator | Sunday 05 April 2026 02:57:57 +0000 (0:00:00.565) 0:00:52.056 **********
2026-04-05 02:57:59.031216 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:59.031226 | orchestrator |
2026-04-05 02:57:59.031237 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 02:57:59.031248 | orchestrator | Sunday 05 April 2026 02:57:58 +0000 (0:00:00.563) 0:00:52.620 **********
2026-04-05 02:57:59.031259 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:57:59.031272 | orchestrator |
2026-04-05 02:57:59.031291 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 02:57:59.031308 | orchestrator | Sunday 05 April 2026 02:57:58 +0000 (0:00:00.177) 0:00:52.797 **********
2026-04-05 02:57:59.031326 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'vg_name': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.031345 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'vg_name': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.031361 | orchestrator |
2026-04-05 02:57:59.031379 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 02:57:59.031399 | orchestrator | Sunday 05 April 2026 02:57:58 +0000 (0:00:00.196) 0:00:52.994 **********
2026-04-05 02:57:59.031416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.031435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:57:59.031485 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:57:59.031505 | orchestrator |
2026-04-05 02:57:59.031525 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-05 02:57:59.031543 | orchestrator | Sunday 05 April 2026 02:57:58 +0000 (0:00:00.159) 0:00:53.154 **********
2026-04-05 02:57:59.031561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 02:57:59.031593 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 02:58:06.287586 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:58:06.287707 | orchestrator |
2026-04-05 02:58:06.287722 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-05 02:58:06.287735 |
orchestrator | Sunday 05 April 2026 02:57:59 +0000 (0:00:00.173) 0:00:53.327 ********** 2026-04-05 02:58:06.287745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})  2026-04-05 02:58:06.287795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})  2026-04-05 02:58:06.287806 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:58:06.287816 | orchestrator | 2026-04-05 02:58:06.287826 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-05 02:58:06.287837 | orchestrator | Sunday 05 April 2026 02:57:59 +0000 (0:00:00.415) 0:00:53.742 ********** 2026-04-05 02:58:06.287854 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 02:58:06.287870 | orchestrator |  "lvm_report": { 2026-04-05 02:58:06.287889 | orchestrator |  "lv": [ 2026-04-05 02:58:06.287906 | orchestrator |  { 2026-04-05 02:58:06.287924 | orchestrator |  "lv_name": "osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9", 2026-04-05 02:58:06.287943 | orchestrator |  "vg_name": "ceph-71b5f103-fb0e-5af6-8506-51783512c8b9" 2026-04-05 02:58:06.287960 | orchestrator |  }, 2026-04-05 02:58:06.287977 | orchestrator |  { 2026-04-05 02:58:06.287994 | orchestrator |  "lv_name": "osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d", 2026-04-05 02:58:06.288011 | orchestrator |  "vg_name": "ceph-8259097b-349e-523a-9f4d-33b374f7dc5d" 2026-04-05 02:58:06.288028 | orchestrator |  } 2026-04-05 02:58:06.288046 | orchestrator |  ], 2026-04-05 02:58:06.288063 | orchestrator |  "pv": [ 2026-04-05 02:58:06.288079 | orchestrator |  { 2026-04-05 02:58:06.288095 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-05 02:58:06.288105 | orchestrator |  "vg_name": "ceph-71b5f103-fb0e-5af6-8506-51783512c8b9" 2026-04-05 02:58:06.288116 | orchestrator |  }, 2026-04-05 
02:58:06.288125 | orchestrator |  { 2026-04-05 02:58:06.288135 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-05 02:58:06.288145 | orchestrator |  "vg_name": "ceph-8259097b-349e-523a-9f4d-33b374f7dc5d" 2026-04-05 02:58:06.288154 | orchestrator |  } 2026-04-05 02:58:06.288164 | orchestrator |  ] 2026-04-05 02:58:06.288173 | orchestrator |  } 2026-04-05 02:58:06.288183 | orchestrator | } 2026-04-05 02:58:06.288192 | orchestrator | 2026-04-05 02:58:06.288202 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-05 02:58:06.288212 | orchestrator | 2026-04-05 02:58:06.288221 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-05 02:58:06.288231 | orchestrator | Sunday 05 April 2026 02:57:59 +0000 (0:00:00.332) 0:00:54.075 ********** 2026-04-05 02:58:06.288240 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-05 02:58:06.288250 | orchestrator | 2026-04-05 02:58:06.288260 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-05 02:58:06.288269 | orchestrator | Sunday 05 April 2026 02:58:00 +0000 (0:00:00.292) 0:00:54.368 ********** 2026-04-05 02:58:06.288279 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:58:06.288288 | orchestrator | 2026-04-05 02:58:06.288298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288307 | orchestrator | Sunday 05 April 2026 02:58:00 +0000 (0:00:00.262) 0:00:54.630 ********** 2026-04-05 02:58:06.288317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-05 02:58:06.288326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-05 02:58:06.288336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-05 02:58:06.288345 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-05 02:58:06.288355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-05 02:58:06.288364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-05 02:58:06.288374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-05 02:58:06.288393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-05 02:58:06.288403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-05 02:58:06.288412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-05 02:58:06.288421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-05 02:58:06.288431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-05 02:58:06.288534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-05 02:58:06.288549 | orchestrator | 2026-04-05 02:58:06.288559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288569 | orchestrator | Sunday 05 April 2026 02:58:00 +0000 (0:00:00.536) 0:00:55.167 ********** 2026-04-05 02:58:06.288578 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288588 | orchestrator | 2026-04-05 02:58:06.288598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288608 | orchestrator | Sunday 05 April 2026 02:58:01 +0000 (0:00:00.230) 0:00:55.398 ********** 2026-04-05 02:58:06.288617 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288627 | orchestrator | 2026-04-05 
02:58:06.288636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288665 | orchestrator | Sunday 05 April 2026 02:58:01 +0000 (0:00:00.236) 0:00:55.634 ********** 2026-04-05 02:58:06.288675 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288685 | orchestrator | 2026-04-05 02:58:06.288694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288704 | orchestrator | Sunday 05 April 2026 02:58:01 +0000 (0:00:00.223) 0:00:55.857 ********** 2026-04-05 02:58:06.288714 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288723 | orchestrator | 2026-04-05 02:58:06.288733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288743 | orchestrator | Sunday 05 April 2026 02:58:02 +0000 (0:00:00.757) 0:00:56.615 ********** 2026-04-05 02:58:06.288752 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288762 | orchestrator | 2026-04-05 02:58:06.288772 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288781 | orchestrator | Sunday 05 April 2026 02:58:02 +0000 (0:00:00.259) 0:00:56.874 ********** 2026-04-05 02:58:06.288791 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288800 | orchestrator | 2026-04-05 02:58:06.288810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288820 | orchestrator | Sunday 05 April 2026 02:58:02 +0000 (0:00:00.227) 0:00:57.102 ********** 2026-04-05 02:58:06.288829 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288839 | orchestrator | 2026-04-05 02:58:06.288849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288858 | orchestrator | Sunday 05 April 2026 02:58:03 +0000 (0:00:00.261) 
0:00:57.363 ********** 2026-04-05 02:58:06.288868 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:06.288877 | orchestrator | 2026-04-05 02:58:06.288887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288897 | orchestrator | Sunday 05 April 2026 02:58:03 +0000 (0:00:00.207) 0:00:57.570 ********** 2026-04-05 02:58:06.288907 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131) 2026-04-05 02:58:06.288917 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131) 2026-04-05 02:58:06.288927 | orchestrator | 2026-04-05 02:58:06.288936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.288944 | orchestrator | Sunday 05 April 2026 02:58:03 +0000 (0:00:00.491) 0:00:58.062 ********** 2026-04-05 02:58:06.288981 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564) 2026-04-05 02:58:06.289000 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564) 2026-04-05 02:58:06.289008 | orchestrator | 2026-04-05 02:58:06.289016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.289024 | orchestrator | Sunday 05 April 2026 02:58:04 +0000 (0:00:00.480) 0:00:58.542 ********** 2026-04-05 02:58:06.289032 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4) 2026-04-05 02:58:06.289040 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4) 2026-04-05 02:58:06.289048 | orchestrator | 2026-04-05 02:58:06.289056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.289064 | orchestrator | Sunday 05 
April 2026 02:58:04 +0000 (0:00:00.471) 0:00:59.014 ********** 2026-04-05 02:58:06.289072 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d) 2026-04-05 02:58:06.289080 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d) 2026-04-05 02:58:06.289088 | orchestrator | 2026-04-05 02:58:06.289096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-05 02:58:06.289104 | orchestrator | Sunday 05 April 2026 02:58:05 +0000 (0:00:00.481) 0:00:59.496 ********** 2026-04-05 02:58:06.289112 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-05 02:58:06.289120 | orchestrator | 2026-04-05 02:58:06.289128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:06.289135 | orchestrator | Sunday 05 April 2026 02:58:05 +0000 (0:00:00.377) 0:00:59.873 ********** 2026-04-05 02:58:06.289143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-05 02:58:06.289151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-05 02:58:06.289159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-05 02:58:06.289167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-05 02:58:06.289175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-05 02:58:06.289182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-05 02:58:06.289190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-05 02:58:06.289198 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-05 02:58:06.289206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-05 02:58:06.289214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-05 02:58:06.289222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-05 02:58:06.289306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-05 02:58:16.370936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-05 02:58:16.371028 | orchestrator | 2026-04-05 02:58:16.371040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371049 | orchestrator | Sunday 05 April 2026 02:58:06 +0000 (0:00:00.709) 0:01:00.582 ********** 2026-04-05 02:58:16.371057 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371065 | orchestrator | 2026-04-05 02:58:16.371073 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371093 | orchestrator | Sunday 05 April 2026 02:58:06 +0000 (0:00:00.239) 0:01:00.822 ********** 2026-04-05 02:58:16.371101 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371125 | orchestrator | 2026-04-05 02:58:16.371132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371140 | orchestrator | Sunday 05 April 2026 02:58:06 +0000 (0:00:00.255) 0:01:01.077 ********** 2026-04-05 02:58:16.371147 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371154 | orchestrator | 2026-04-05 02:58:16.371161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371168 | 
orchestrator | Sunday 05 April 2026 02:58:07 +0000 (0:00:00.293) 0:01:01.371 ********** 2026-04-05 02:58:16.371175 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371182 | orchestrator | 2026-04-05 02:58:16.371189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371196 | orchestrator | Sunday 05 April 2026 02:58:07 +0000 (0:00:00.236) 0:01:01.607 ********** 2026-04-05 02:58:16.371203 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371211 | orchestrator | 2026-04-05 02:58:16.371218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371225 | orchestrator | Sunday 05 April 2026 02:58:07 +0000 (0:00:00.249) 0:01:01.856 ********** 2026-04-05 02:58:16.371232 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371239 | orchestrator | 2026-04-05 02:58:16.371246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371253 | orchestrator | Sunday 05 April 2026 02:58:07 +0000 (0:00:00.238) 0:01:02.095 ********** 2026-04-05 02:58:16.371260 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371267 | orchestrator | 2026-04-05 02:58:16.371275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371282 | orchestrator | Sunday 05 April 2026 02:58:08 +0000 (0:00:00.221) 0:01:02.316 ********** 2026-04-05 02:58:16.371289 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371296 | orchestrator | 2026-04-05 02:58:16.371303 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371311 | orchestrator | Sunday 05 April 2026 02:58:08 +0000 (0:00:00.239) 0:01:02.556 ********** 2026-04-05 02:58:16.371318 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-05 02:58:16.371326 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-05 02:58:16.371333 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-05 02:58:16.371340 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-05 02:58:16.371348 | orchestrator | 2026-04-05 02:58:16.371355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371362 | orchestrator | Sunday 05 April 2026 02:58:09 +0000 (0:00:01.012) 0:01:03.568 ********** 2026-04-05 02:58:16.371369 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371376 | orchestrator | 2026-04-05 02:58:16.371383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371390 | orchestrator | Sunday 05 April 2026 02:58:10 +0000 (0:00:00.796) 0:01:04.365 ********** 2026-04-05 02:58:16.371397 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371404 | orchestrator | 2026-04-05 02:58:16.371411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371419 | orchestrator | Sunday 05 April 2026 02:58:10 +0000 (0:00:00.241) 0:01:04.606 ********** 2026-04-05 02:58:16.371426 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371433 | orchestrator | 2026-04-05 02:58:16.371547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-05 02:58:16.371557 | orchestrator | Sunday 05 April 2026 02:58:10 +0000 (0:00:00.227) 0:01:04.834 ********** 2026-04-05 02:58:16.371566 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371574 | orchestrator | 2026-04-05 02:58:16.371582 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-05 02:58:16.371591 | orchestrator | Sunday 05 April 2026 02:58:10 +0000 (0:00:00.299) 0:01:05.133 ********** 2026-04-05 02:58:16.371600 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
02:58:16.371608 | orchestrator | 2026-04-05 02:58:16.371623 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-05 02:58:16.371632 | orchestrator | Sunday 05 April 2026 02:58:10 +0000 (0:00:00.138) 0:01:05.272 ********** 2026-04-05 02:58:16.371641 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ee367cf6-46c0-523d-847e-ea936940168f'}}) 2026-04-05 02:58:16.371650 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd286f04f-da20-50d3-800d-bbe3052cfbc3'}}) 2026-04-05 02:58:16.371659 | orchestrator | 2026-04-05 02:58:16.371667 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-05 02:58:16.371676 | orchestrator | Sunday 05 April 2026 02:58:11 +0000 (0:00:00.223) 0:01:05.496 ********** 2026-04-05 02:58:16.371686 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}) 2026-04-05 02:58:16.371696 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}) 2026-04-05 02:58:16.371703 | orchestrator | 2026-04-05 02:58:16.371711 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-05 02:58:16.371731 | orchestrator | Sunday 05 April 2026 02:58:13 +0000 (0:00:01.836) 0:01:07.332 ********** 2026-04-05 02:58:16.371739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:16.371748 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:16.371755 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 02:58:16.371763 | orchestrator | 2026-04-05 02:58:16.371775 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-05 02:58:16.371782 | orchestrator | Sunday 05 April 2026 02:58:13 +0000 (0:00:00.209) 0:01:07.541 ********** 2026-04-05 02:58:16.371789 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}) 2026-04-05 02:58:16.371797 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}) 2026-04-05 02:58:16.371804 | orchestrator | 2026-04-05 02:58:16.371811 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-05 02:58:16.371818 | orchestrator | Sunday 05 April 2026 02:58:14 +0000 (0:00:01.410) 0:01:08.952 ********** 2026-04-05 02:58:16.371826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:16.371833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:16.371840 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371847 | orchestrator | 2026-04-05 02:58:16.371854 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-05 02:58:16.371861 | orchestrator | Sunday 05 April 2026 02:58:14 +0000 (0:00:00.163) 0:01:09.116 ********** 2026-04-05 02:58:16.371868 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371889 | orchestrator | 2026-04-05 02:58:16.371897 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-05 02:58:16.371912 | 
orchestrator | Sunday 05 April 2026 02:58:14 +0000 (0:00:00.163) 0:01:09.280 ********** 2026-04-05 02:58:16.371920 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:16.371927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:16.371940 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371948 | orchestrator | 2026-04-05 02:58:16.371955 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-05 02:58:16.371962 | orchestrator | Sunday 05 April 2026 02:58:15 +0000 (0:00:00.391) 0:01:09.671 ********** 2026-04-05 02:58:16.371969 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.371976 | orchestrator | 2026-04-05 02:58:16.371983 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-05 02:58:16.371991 | orchestrator | Sunday 05 April 2026 02:58:15 +0000 (0:00:00.160) 0:01:09.831 ********** 2026-04-05 02:58:16.371998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:16.372005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:16.372012 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.372019 | orchestrator | 2026-04-05 02:58:16.372027 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-05 02:58:16.372034 | orchestrator | Sunday 05 April 2026 02:58:15 +0000 (0:00:00.168) 0:01:10.001 ********** 2026-04-05 02:58:16.372041 | orchestrator | 
skipping: [testbed-node-5] 2026-04-05 02:58:16.372048 | orchestrator | 2026-04-05 02:58:16.372055 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-05 02:58:16.372062 | orchestrator | Sunday 05 April 2026 02:58:15 +0000 (0:00:00.159) 0:01:10.160 ********** 2026-04-05 02:58:16.372069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:16.372077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:16.372084 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:16.372091 | orchestrator | 2026-04-05 02:58:16.372098 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-05 02:58:16.372106 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 (0:00:00.180) 0:01:10.341 ********** 2026-04-05 02:58:16.372113 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:58:16.372120 | orchestrator | 2026-04-05 02:58:16.372128 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-05 02:58:16.372135 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 (0:00:00.154) 0:01:10.496 ********** 2026-04-05 02:58:16.372148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:23.346537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:23.346675 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:23.346691 | orchestrator | 2026-04-05 02:58:23.346702 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-05 02:58:23.346711 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 (0:00:00.174) 0:01:10.671 ********** 2026-04-05 02:58:23.346736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:23.346745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:23.346752 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:23.346759 | orchestrator | 2026-04-05 02:58:23.346766 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-05 02:58:23.346773 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 (0:00:00.165) 0:01:10.836 ********** 2026-04-05 02:58:23.346807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 02:58:23.346814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 02:58:23.346821 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:23.346827 | orchestrator | 2026-04-05 02:58:23.346834 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-05 02:58:23.346840 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 (0:00:00.173) 0:01:11.010 ********** 2026-04-05 02:58:23.346846 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:58:23.346853 | orchestrator | 2026-04-05 02:58:23.346861 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-05 02:58:23.346867 | orchestrator | Sunday 05 April 2026 02:58:16 +0000 
(0:00:00.150) 0:01:11.160 **********
2026-04-05 02:58:23.346876 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.346883 | orchestrator |
2026-04-05 02:58:23.346890 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-05 02:58:23.346898 | orchestrator | Sunday 05 April 2026 02:58:17 +0000 (0:00:00.158) 0:01:11.319 **********
2026-04-05 02:58:23.346904 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.346911 | orchestrator |
2026-04-05 02:58:23.346918 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-05 02:58:23.346925 | orchestrator | Sunday 05 April 2026 02:58:17 +0000 (0:00:00.371) 0:01:11.690 **********
2026-04-05 02:58:23.346932 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:58:23.346940 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-05 02:58:23.346947 | orchestrator | }
2026-04-05 02:58:23.346953 | orchestrator |
2026-04-05 02:58:23.346960 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-05 02:58:23.346966 | orchestrator | Sunday 05 April 2026 02:58:17 +0000 (0:00:00.157) 0:01:11.848 **********
2026-04-05 02:58:23.346972 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:58:23.346978 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-05 02:58:23.346985 | orchestrator | }
2026-04-05 02:58:23.346992 | orchestrator |
2026-04-05 02:58:23.346998 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-05 02:58:23.347004 | orchestrator | Sunday 05 April 2026 02:58:17 +0000 (0:00:00.159) 0:01:12.008 **********
2026-04-05 02:58:23.347011 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:58:23.347016 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-05 02:58:23.347023 | orchestrator | }
2026-04-05 02:58:23.347031 | orchestrator |
2026-04-05 02:58:23.347039 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-05 02:58:23.347046 | orchestrator | Sunday 05 April 2026 02:58:17 +0000 (0:00:00.154) 0:01:12.163 **********
2026-04-05 02:58:23.347054 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:23.347062 | orchestrator |
2026-04-05 02:58:23.347070 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-05 02:58:23.347077 | orchestrator | Sunday 05 April 2026 02:58:18 +0000 (0:00:00.550) 0:01:12.714 **********
2026-04-05 02:58:23.347085 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:23.347092 | orchestrator |
2026-04-05 02:58:23.347100 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-05 02:58:23.347108 | orchestrator | Sunday 05 April 2026 02:58:18 +0000 (0:00:00.547) 0:01:13.261 **********
2026-04-05 02:58:23.347116 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:23.347123 | orchestrator |
2026-04-05 02:58:23.347132 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-05 02:58:23.347139 | orchestrator | Sunday 05 April 2026 02:58:19 +0000 (0:00:00.572) 0:01:13.833 **********
2026-04-05 02:58:23.347147 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:23.347155 | orchestrator |
2026-04-05 02:58:23.347163 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-05 02:58:23.347181 | orchestrator | Sunday 05 April 2026 02:58:19 +0000 (0:00:00.156) 0:01:13.990 **********
2026-04-05 02:58:23.347190 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347198 | orchestrator |
2026-04-05 02:58:23.347205 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-05 02:58:23.347213 | orchestrator | Sunday 05 April 2026 02:58:19 +0000 (0:00:00.132) 0:01:14.122 **********
2026-04-05 02:58:23.347221 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347228 | orchestrator |
2026-04-05 02:58:23.347236 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-05 02:58:23.347243 | orchestrator | Sunday 05 April 2026 02:58:19 +0000 (0:00:00.139) 0:01:14.262 **********
2026-04-05 02:58:23.347251 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:58:23.347259 | orchestrator |     "vgs_report": {
2026-04-05 02:58:23.347266 | orchestrator |         "vg": []
2026-04-05 02:58:23.347302 | orchestrator |     }
2026-04-05 02:58:23.347310 | orchestrator | }
2026-04-05 02:58:23.347318 | orchestrator |
2026-04-05 02:58:23.347326 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-05 02:58:23.347334 | orchestrator | Sunday 05 April 2026 02:58:20 +0000 (0:00:00.151) 0:01:14.413 **********
2026-04-05 02:58:23.347341 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347349 | orchestrator |
2026-04-05 02:58:23.347357 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-05 02:58:23.347364 | orchestrator | Sunday 05 April 2026 02:58:20 +0000 (0:00:00.168) 0:01:14.582 **********
2026-04-05 02:58:23.347380 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347389 | orchestrator |
2026-04-05 02:58:23.347396 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-05 02:58:23.347404 | orchestrator | Sunday 05 April 2026 02:58:20 +0000 (0:00:00.377) 0:01:14.960 **********
2026-04-05 02:58:23.347412 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347418 | orchestrator |
2026-04-05 02:58:23.347425 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-05 02:58:23.347431 | orchestrator | Sunday 05 April 2026 02:58:20 +0000 (0:00:00.148) 0:01:15.108 **********
2026-04-05 02:58:23.347474 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347481 | orchestrator |
2026-04-05 02:58:23.347489 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-05 02:58:23.347495 | orchestrator | Sunday 05 April 2026 02:58:20 +0000 (0:00:00.151) 0:01:15.260 **********
2026-04-05 02:58:23.347502 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347508 | orchestrator |
2026-04-05 02:58:23.347515 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-05 02:58:23.347521 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.152) 0:01:15.412 **********
2026-04-05 02:58:23.347527 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347534 | orchestrator |
2026-04-05 02:58:23.347541 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-05 02:58:23.347548 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.160) 0:01:15.573 **********
2026-04-05 02:58:23.347554 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347561 | orchestrator |
2026-04-05 02:58:23.347567 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-05 02:58:23.347573 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.172) 0:01:15.745 **********
2026-04-05 02:58:23.347580 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347587 | orchestrator |
2026-04-05 02:58:23.347594 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-05 02:58:23.347600 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.151) 0:01:15.897 **********
2026-04-05 02:58:23.347606 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347613 | orchestrator |
2026-04-05 02:58:23.347619 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-05 02:58:23.347624 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.171) 0:01:16.069 **********
2026-04-05 02:58:23.347640 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347647 | orchestrator |
2026-04-05 02:58:23.347653 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-05 02:58:23.347660 | orchestrator | Sunday 05 April 2026 02:58:21 +0000 (0:00:00.175) 0:01:16.245 **********
2026-04-05 02:58:23.347666 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347672 | orchestrator |
2026-04-05 02:58:23.347678 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-05 02:58:23.347684 | orchestrator | Sunday 05 April 2026 02:58:22 +0000 (0:00:00.141) 0:01:16.386 **********
2026-04-05 02:58:23.347691 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347698 | orchestrator |
2026-04-05 02:58:23.347704 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-05 02:58:23.347711 | orchestrator | Sunday 05 April 2026 02:58:22 +0000 (0:00:00.148) 0:01:16.534 **********
2026-04-05 02:58:23.347717 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347723 | orchestrator |
2026-04-05 02:58:23.347730 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-05 02:58:23.347737 | orchestrator | Sunday 05 April 2026 02:58:22 +0000 (0:00:00.402) 0:01:16.936 **********
2026-04-05 02:58:23.347744 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347750 | orchestrator |
2026-04-05 02:58:23.347757 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-05 02:58:23.347763 | orchestrator | Sunday 05 April 2026 02:58:22 +0000 (0:00:00.177) 0:01:17.114 **********
2026-04-05 02:58:23.347771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:23.347777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:23.347784 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347790 | orchestrator |
2026-04-05 02:58:23.347796 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-05 02:58:23.347802 | orchestrator | Sunday 05 April 2026 02:58:22 +0000 (0:00:00.189) 0:01:17.304 **********
2026-04-05 02:58:23.347808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:23.347815 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:23.347821 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:23.347827 | orchestrator |
2026-04-05 02:58:23.347833 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-05 02:58:23.347840 | orchestrator | Sunday 05 April 2026 02:58:23 +0000 (0:00:00.174) 0:01:17.478 **********
2026-04-05 02:58:23.347855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.839931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840086 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840106 | orchestrator |
2026-04-05 02:58:26.840171 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-05 02:58:26.840184 | orchestrator | Sunday 05 April 2026 02:58:23 +0000 (0:00:00.169) 0:01:17.647 **********
2026-04-05 02:58:26.840192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840229 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840237 | orchestrator |
2026-04-05 02:58:26.840245 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-05 02:58:26.840253 | orchestrator | Sunday 05 April 2026 02:58:23 +0000 (0:00:00.180) 0:01:17.828 **********
2026-04-05 02:58:26.840260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840275 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840283 | orchestrator |
2026-04-05 02:58:26.840291 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-05 02:58:26.840298 | orchestrator | Sunday 05 April 2026 02:58:23 +0000 (0:00:00.172) 0:01:18.000 **********
2026-04-05 02:58:26.840306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840321 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840329 | orchestrator |
2026-04-05 02:58:26.840337 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-05 02:58:26.840344 | orchestrator | Sunday 05 April 2026 02:58:23 +0000 (0:00:00.177) 0:01:18.177 **********
2026-04-05 02:58:26.840352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840359 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840367 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840374 | orchestrator |
2026-04-05 02:58:26.840381 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-05 02:58:26.840388 | orchestrator | Sunday 05 April 2026 02:58:24 +0000 (0:00:00.196) 0:01:18.374 **********
2026-04-05 02:58:26.840395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840410 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840416 | orchestrator |
2026-04-05 02:58:26.840424 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-05 02:58:26.840431 | orchestrator | Sunday 05 April 2026 02:58:24 +0000 (0:00:00.168) 0:01:18.543 **********
2026-04-05 02:58:26.840457 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:26.840463 | orchestrator |
2026-04-05 02:58:26.840468 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-05 02:58:26.840473 | orchestrator | Sunday 05 April 2026 02:58:24 +0000 (0:00:00.593) 0:01:19.136 **********
2026-04-05 02:58:26.840478 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:26.840484 | orchestrator |
2026-04-05 02:58:26.840489 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-05 02:58:26.840495 | orchestrator | Sunday 05 April 2026 02:58:25 +0000 (0:00:00.919) 0:01:20.055 **********
2026-04-05 02:58:26.840500 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:26.840505 | orchestrator |
2026-04-05 02:58:26.840511 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-05 02:58:26.840516 | orchestrator | Sunday 05 April 2026 02:58:25 +0000 (0:00:00.157) 0:01:20.213 **********
2026-04-05 02:58:26.840529 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'vg_name': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840535 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'vg_name': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840541 | orchestrator |
2026-04-05 02:58:26.840547 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-05 02:58:26.840552 | orchestrator | Sunday 05 April 2026 02:58:26 +0000 (0:00:00.205) 0:01:20.418 **********
2026-04-05 02:58:26.840574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840589 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840594 | orchestrator |
2026-04-05 02:58:26.840598 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-05 02:58:26.840603 | orchestrator | Sunday 05 April 2026 02:58:26 +0000 (0:00:00.172) 0:01:20.590 **********
2026-04-05 02:58:26.840608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840617 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840621 | orchestrator |
2026-04-05 02:58:26.840626 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-05 02:58:26.840630 | orchestrator | Sunday 05 April 2026 02:58:26 +0000 (0:00:00.174) 0:01:20.765 **********
2026-04-05 02:58:26.840635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 02:58:26.840639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 02:58:26.840644 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:26.840648 | orchestrator |
2026-04-05 02:58:26.840653 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-05 02:58:26.840658 | orchestrator | Sunday 05 April 2026 02:58:26 +0000 (0:00:00.189) 0:01:20.954 **********
2026-04-05 02:58:26.840662 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 02:58:26.840667 | orchestrator |     "lvm_report": {
2026-04-05 02:58:26.840672 | orchestrator |         "lv": [
2026-04-05 02:58:26.840677 | orchestrator |             {
2026-04-05 02:58:26.840682 | orchestrator |                 "lv_name": "osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3",
2026-04-05 02:58:26.840687 | orchestrator |                 "vg_name": "ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3"
2026-04-05 02:58:26.840692 | orchestrator |             },
2026-04-05 02:58:26.840696 | orchestrator |             {
2026-04-05 02:58:26.840701 | orchestrator |                 "lv_name": "osd-block-ee367cf6-46c0-523d-847e-ea936940168f",
2026-04-05 02:58:26.840706 | orchestrator |                 "vg_name": "ceph-ee367cf6-46c0-523d-847e-ea936940168f"
2026-04-05 02:58:26.840710 | orchestrator |             }
2026-04-05 02:58:26.840715 | orchestrator |         ],
2026-04-05 02:58:26.840719 | orchestrator |         "pv": [
2026-04-05 02:58:26.840724 | orchestrator |             {
2026-04-05 02:58:26.840757 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-05 02:58:26.840762 | orchestrator |                 "vg_name": "ceph-ee367cf6-46c0-523d-847e-ea936940168f"
2026-04-05 02:58:26.840767 | orchestrator |             },
2026-04-05 02:58:26.840771 | orchestrator |             {
2026-04-05 02:58:26.840776 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-05 02:58:26.840788 | orchestrator |                 "vg_name": "ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3"
2026-04-05 02:58:26.840793 | orchestrator |             }
2026-04-05 02:58:26.840798 | orchestrator |         ]
2026-04-05 02:58:26.840802 | orchestrator |     }
2026-04-05 02:58:26.840807 | orchestrator | }
2026-04-05 02:58:26.840812 | orchestrator |
2026-04-05 02:58:26.840816 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:58:26.840821 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-05 02:58:26.840826 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-05 02:58:26.840830 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-05 02:58:26.840835 | orchestrator |
2026-04-05 02:58:26.840839 | orchestrator |
2026-04-05 02:58:26.840877 | orchestrator |
2026-04-05 02:58:26.840882 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:58:26.840887 | orchestrator | Sunday 05 April 2026 02:58:26 +0000 (0:00:00.168) 0:01:21.123 **********
2026-04-05 02:58:26.840891 | orchestrator | ===============================================================================
2026-04-05 02:58:26.840896 | orchestrator | Create block VGs -------------------------------------------------------- 5.79s
2026-04-05 02:58:26.840900 | orchestrator | Create block LVs -------------------------------------------------------- 4.31s
2026-04-05 02:58:26.840905 | orchestrator | Add known partitions to the list of available block devices ------------- 2.12s
2026-04-05 02:58:26.840910 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 2.06s
2026-04-05 02:58:26.840914 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.80s
2026-04-05 02:58:26.840919 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.75s
2026-04-05 02:58:26.840923 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.69s
2026-04-05 02:58:26.840928 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s
2026-04-05 02:58:26.840937 | orchestrator | Add known links to the list of available block devices ------------------ 1.58s
2026-04-05 02:58:27.333511 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2026-04-05 02:58:27.333590 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s
2026-04-05 02:58:27.333598 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2026-04-05 02:58:27.333619 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.92s
2026-04-05 02:58:27.333624 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.91s
2026-04-05 02:58:27.333629 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s
2026-04-05 02:58:27.333634 | orchestrator | Print LVM report data --------------------------------------------------- 0.81s
2026-04-05 02:58:27.333638 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-04-05 02:58:27.333643 | orchestrator | Fail if WAL LV defined in lvm_volumes is missing ------------------------ 0.78s
2026-04-05 02:58:27.333648 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s
2026-04-05 02:58:27.333652 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.77s
2026-04-05 02:58:40.042430 | orchestrator | 2026-04-05 02:58:40 | INFO  | Task e2e4229c-6758-4a4a-b80f-156734852f49 (facts) was prepared for execution.
2026-04-05 02:58:40.042592 | orchestrator | 2026-04-05 02:58:40 | INFO  | It takes a moment until task e2e4229c-6758-4a4a-b80f-156734852f49 (facts) has been started and output is visible here.
2026-04-05 02:58:54.294829 | orchestrator |
2026-04-05 02:58:54.294959 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-05 02:58:54.295062 | orchestrator |
2026-04-05 02:58:54.295080 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-05 02:58:54.295091 | orchestrator | Sunday 05 April 2026 02:58:44 +0000 (0:00:00.306) 0:00:00.306 **********
2026-04-05 02:58:54.295104 | orchestrator | ok: [testbed-manager]
2026-04-05 02:58:54.295120 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:58:54.295134 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:58:54.295195 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:58:54.295210 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:58:54.295378 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:58:54.295393 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:54.295408 | orchestrator |
2026-04-05 02:58:54.295423 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-05 02:58:54.295459 | orchestrator | Sunday 05 April 2026 02:58:46 +0000 (0:00:01.220) 0:00:01.527 **********
2026-04-05 02:58:54.295473 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:58:54.295489 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:58:54.295504 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:58:54.295519 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:58:54.295534 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:58:54.295548 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:58:54.295564 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:54.295578 | orchestrator |
2026-04-05 02:58:54.295593 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-05 02:58:54.295605 | orchestrator |
2026-04-05 02:58:54.295615 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-05 02:58:54.295625 | orchestrator | Sunday 05 April 2026 02:58:47 +0000 (0:00:01.470) 0:00:02.997 **********
2026-04-05 02:58:54.295636 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:58:54.295650 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:58:54.295664 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:58:54.295679 | orchestrator | ok: [testbed-manager]
2026-04-05 02:58:54.295694 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:58:54.295718 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:58:54.295762 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:58:54.295777 | orchestrator |
2026-04-05 02:58:54.295792 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-05 02:58:54.295806 | orchestrator |
2026-04-05 02:58:54.295821 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-05 02:58:54.295860 | orchestrator | Sunday 05 April 2026 02:58:53 +0000 (0:00:05.622) 0:00:08.619 **********
2026-04-05 02:58:54.295874 | orchestrator | skipping: [testbed-manager]
2026-04-05 02:58:54.295889 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:58:54.295903 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:58:54.295917 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:58:54.295983 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:58:54.295997 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:58:54.296011 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:58:54.296026 | orchestrator |
2026-04-05 02:58:54.296040 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 02:58:54.296055 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296070 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296084 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296098 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296126 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296166 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296180 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 02:58:54.296192 | orchestrator |
2026-04-05 02:58:54.296203 | orchestrator |
2026-04-05 02:58:54.296217 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 02:58:54.296248 | orchestrator | Sunday 05 April 2026 02:58:53 +0000 (0:00:00.650) 0:00:09.269 **********
2026-04-05 02:58:54.296261 | orchestrator | ===============================================================================
2026-04-05 02:58:54.296274 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.62s
2026-04-05 02:58:54.296288 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.47s
2026-04-05 02:58:54.296302 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s
2026-04-05 02:58:54.296315 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s
2026-04-05 02:58:57.178841 | orchestrator | 2026-04-05 02:58:57 | INFO  | Task 2d37fcd9-2b35-45bc-b810-d715f4db720f (ceph) was prepared for execution.
2026-04-05 02:58:57.178949 | orchestrator | 2026-04-05 02:58:57 | INFO  | It takes a moment until task 2d37fcd9-2b35-45bc-b810-d715f4db720f (ceph) has been started and output is visible here.
2026-04-05 02:59:17.005549 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 02:59:17.005688 | orchestrator | 2.16.14
2026-04-05 02:59:17.005714 | orchestrator |
2026-04-05 02:59:17.005733 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-05 02:59:17.005749 | orchestrator |
2026-04-05 02:59:17.005766 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 02:59:17.005783 | orchestrator | Sunday 05 April 2026 02:59:02 +0000 (0:00:00.895) 0:00:00.895 **********
2026-04-05 02:59:17.005801 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 02:59:17.005812 | orchestrator |
2026-04-05 02:59:17.005822 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 02:59:17.005831 | orchestrator | Sunday 05 April 2026 02:59:04 +0000 (0:00:01.341) 0:00:02.236 **********
2026-04-05 02:59:17.005841 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.005851 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.005861 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.005870 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.005880 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.005889 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.005900 | orchestrator |
2026-04-05 02:59:17.005909 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 02:59:17.005919 | orchestrator | Sunday 05 April 2026 02:59:05 +0000 (0:00:01.311) 0:00:03.548 **********
2026-04-05 02:59:17.005929 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.005938 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.005948 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.005957 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.005967 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.005976 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.005985 | orchestrator |
2026-04-05 02:59:17.005995 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 02:59:17.006005 | orchestrator | Sunday 05 April 2026 02:59:06 +0000 (0:00:00.896) 0:00:04.444 **********
2026-04-05 02:59:17.006014 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006077 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006088 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006100 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006136 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006147 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006159 | orchestrator |
2026-04-05 02:59:17.006171 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 02:59:17.006183 | orchestrator | Sunday 05 April 2026 02:59:07 +0000 (0:00:01.026) 0:00:05.470 **********
2026-04-05 02:59:17.006194 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006204 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006213 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006222 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006232 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006241 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006250 | orchestrator |
2026-04-05 02:59:17.006260 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 02:59:17.006269 | orchestrator | Sunday 05 April 2026 02:59:08 +0000 (0:00:00.901) 0:00:06.372 **********
2026-04-05 02:59:17.006279 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006288 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006297 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006307 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006316 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006325 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006335 | orchestrator |
2026-04-05 02:59:17.006344 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 02:59:17.006353 | orchestrator | Sunday 05 April 2026 02:59:08 +0000 (0:00:00.643) 0:00:07.016 **********
2026-04-05 02:59:17.006363 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006372 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006382 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006391 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006400 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006409 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006419 | orchestrator |
2026-04-05 02:59:17.006457 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 02:59:17.006475 | orchestrator | Sunday 05 April 2026 02:59:09 +0000 (0:00:00.897) 0:00:07.913 **********
2026-04-05 02:59:17.006487 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:59:17.006499 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:59:17.006508 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:59:17.006518 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:59:17.006528 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:59:17.006537 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:59:17.006547 | orchestrator |
2026-04-05 02:59:17.006556 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 02:59:17.006566 | orchestrator | Sunday 05 April 2026 02:59:10 +0000 (0:00:00.645) 0:00:08.559 **********
2026-04-05 02:59:17.006576 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006585 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006594 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006603 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006613 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006635 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006645 | orchestrator |
2026-04-05 02:59:17.006654 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 02:59:17.006664 | orchestrator | Sunday 05 April 2026 02:59:11 +0000 (0:00:00.862) 0:00:09.422 **********
2026-04-05 02:59:17.006674 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 02:59:17.006683 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 02:59:17.006693 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 02:59:17.006702 | orchestrator |
2026-04-05 02:59:17.006711 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 02:59:17.006721 | orchestrator | Sunday 05 April 2026 02:59:12 +0000 (0:00:00.727) 0:00:10.149 **********
2026-04-05 02:59:17.006738 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:17.006747 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:17.006757 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:17.006787 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:17.006797 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:17.006807 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:17.006816 | orchestrator |
2026-04-05 02:59:17.006825 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 02:59:17.006835 | orchestrator | Sunday 05 April 2026 02:59:12 +0000 (0:00:00.820) 0:00:10.970 **********
2026-04-05 02:59:17.006844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-04-05 02:59:17.006854 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 02:59:17.006863 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 02:59:17.006873 | orchestrator | 2026-04-05 02:59:17.006883 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 02:59:17.006892 | orchestrator | Sunday 05 April 2026 02:59:15 +0000 (0:00:02.561) 0:00:13.531 ********** 2026-04-05 02:59:17.006902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 02:59:17.006912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 02:59:17.006922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 02:59:17.006931 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:17.006941 | orchestrator | 2026-04-05 02:59:17.006950 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 02:59:17.006960 | orchestrator | Sunday 05 April 2026 02:59:15 +0000 (0:00:00.453) 0:00:13.984 ********** 2026-04-05 02:59:17.006971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.006984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.006994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.007003 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:17.007013 | orchestrator | 2026-04-05 02:59:17.007022 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 02:59:17.007032 | orchestrator | Sunday 05 April 2026 02:59:16 +0000 (0:00:00.649) 0:00:14.634 ********** 2026-04-05 02:59:17.007044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.007057 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.007067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:17.007082 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:17.007092 | orchestrator | 2026-04-05 02:59:17.007107 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-05 02:59:17.007117 | orchestrator | Sunday 05 April 2026 02:59:16 +0000 (0:00:00.196) 0:00:14.831 ********** 2026-04-05 02:59:17.007136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 02:59:13.936921', 'end': '2026-04-05 02:59:13.986644', 'delta': '0:00:00.049723', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 02:59:27.414767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 02:59:14.522473', 'end': '2026-04-05 02:59:14.570238', 'delta': '0:00:00.047765', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 02:59:27.414882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 02:59:15.074266', 'end': '2026-04-05 02:59:15.113115', 'delta': 
'0:00:00.038849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 02:59:27.414900 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.414914 | orchestrator | 2026-04-05 02:59:27.414931 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 02:59:27.414952 | orchestrator | Sunday 05 April 2026 02:59:16 +0000 (0:00:00.216) 0:00:15.048 ********** 2026-04-05 02:59:27.414972 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:27.415000 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:27.415026 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:27.415043 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:59:27.415060 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:59:27.415078 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:59:27.415095 | orchestrator | 2026-04-05 02:59:27.415112 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 02:59:27.415128 | orchestrator | Sunday 05 April 2026 02:59:17 +0000 (0:00:00.744) 0:00:15.792 ********** 2026-04-05 02:59:27.415145 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 02:59:27.415163 | orchestrator | 2026-04-05 02:59:27.415180 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 02:59:27.415197 | orchestrator | Sunday 05 April 2026 02:59:18 +0000 (0:00:00.893) 0:00:16.685 ********** 2026-04-05 02:59:27.415249 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415269 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415288 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415302 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415315 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415327 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415340 | orchestrator | 2026-04-05 02:59:27.415352 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 02:59:27.415366 | orchestrator | Sunday 05 April 2026 02:59:19 +0000 (0:00:00.863) 0:00:17.549 ********** 2026-04-05 02:59:27.415380 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415392 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415404 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415415 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415454 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415466 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415476 | orchestrator | 2026-04-05 02:59:27.415487 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 02:59:27.415498 | orchestrator | Sunday 05 April 2026 02:59:20 +0000 (0:00:01.179) 0:00:18.728 ********** 2026-04-05 02:59:27.415509 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415520 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415530 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415541 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415552 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415578 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415589 | orchestrator | 2026-04-05 02:59:27.415600 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 02:59:27.415611 | orchestrator | Sunday 05 April 2026 02:59:21 +0000 
(0:00:00.613) 0:00:19.341 ********** 2026-04-05 02:59:27.415622 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415632 | orchestrator | 2026-04-05 02:59:27.415643 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 02:59:27.415654 | orchestrator | Sunday 05 April 2026 02:59:21 +0000 (0:00:00.137) 0:00:19.479 ********** 2026-04-05 02:59:27.415665 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415676 | orchestrator | 2026-04-05 02:59:27.415687 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 02:59:27.415697 | orchestrator | Sunday 05 April 2026 02:59:21 +0000 (0:00:00.252) 0:00:19.731 ********** 2026-04-05 02:59:27.415708 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415719 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415729 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415740 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415751 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415763 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415774 | orchestrator | 2026-04-05 02:59:27.415805 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 02:59:27.415816 | orchestrator | Sunday 05 April 2026 02:59:22 +0000 (0:00:00.880) 0:00:20.612 ********** 2026-04-05 02:59:27.415827 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415838 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415849 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415859 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415870 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415881 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415892 | orchestrator | 2026-04-05 02:59:27.415902 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-04-05 02:59:27.415913 | orchestrator | Sunday 05 April 2026 02:59:23 +0000 (0:00:00.669) 0:00:21.282 ********** 2026-04-05 02:59:27.415924 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.415935 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.415945 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.415966 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.415977 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.415988 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.415998 | orchestrator | 2026-04-05 02:59:27.416009 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 02:59:27.416020 | orchestrator | Sunday 05 April 2026 02:59:24 +0000 (0:00:00.923) 0:00:22.205 ********** 2026-04-05 02:59:27.416031 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.416042 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.416052 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.416063 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.416074 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.416084 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.416095 | orchestrator | 2026-04-05 02:59:27.416106 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 02:59:27.416117 | orchestrator | Sunday 05 April 2026 02:59:24 +0000 (0:00:00.687) 0:00:22.893 ********** 2026-04-05 02:59:27.416127 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.416138 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.416148 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.416159 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.416170 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.416180 | orchestrator 
| skipping: [testbed-node-2] 2026-04-05 02:59:27.416191 | orchestrator | 2026-04-05 02:59:27.416202 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 02:59:27.416213 | orchestrator | Sunday 05 April 2026 02:59:25 +0000 (0:00:00.849) 0:00:23.742 ********** 2026-04-05 02:59:27.416224 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.416234 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.416245 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.416255 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.416266 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.416277 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.416287 | orchestrator | 2026-04-05 02:59:27.416299 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 02:59:27.416310 | orchestrator | Sunday 05 April 2026 02:59:26 +0000 (0:00:00.684) 0:00:24.426 ********** 2026-04-05 02:59:27.416321 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:27.416332 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:27.416343 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:27.416353 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:27.416364 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:27.416375 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:27.416385 | orchestrator | 2026-04-05 02:59:27.416396 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 02:59:27.416407 | orchestrator | Sunday 05 April 2026 02:59:27 +0000 (0:00:00.901) 0:00:25.328 ********** 2026-04-05 02:59:27.416419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.416459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.416487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.527989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:27.528013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:27.528021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:27.528027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-04-05 02:59:27.528042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:27.528053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 02:59:27.652297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:27.652401 | orchestrator | skipping: [testbed-node-4] => 
(items loop0-loop7, sda, sdb, sdc, sdd, sr0; full ansible_devices facts elided)  2026-04-05 02:59:27.923077 | orchestrator | skipping: [testbed-node-3]  2026-04-05 02:59:27.923106 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; full ansible_devices facts elided)  2026-04-05 02:59:27.923266 | orchestrator | skipping: [testbed-node-4]  2026-04-05 02:59:27.923411 | orchestrator | skipping: [testbed-node-0] => (items loop0-loop7, sda, sr0; full ansible_devices facts elided)  2026-04-05 02:59:28.170307 | orchestrator | skipping: [testbed-node-5]  2026-04-05 02:59:28.170328 | orchestrator | skipping: [testbed-node-1] => (items loop0-loop7, sda, sr0; full ansible_devices facts elided)  2026-04-05 02:59:28.319965 | orchestrator | skipping: [testbed-node-0]  2026-04-05 02:59:28.319981 | orchestrator | skipping: [testbed-node-1]  2026-04-05 02:59:28.319987 | orchestrator | skipping: [testbed-node-2] => (items loop0-loop7, sda; full ansible_devices facts elided)  2026-04-05 02:59:28.648726 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 02:59:28.648735 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:28.648743 | orchestrator | 2026-04-05 02:59:28.648752 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 02:59:28.648760 | orchestrator | Sunday 05 April 2026 02:59:28 +0000 (0:00:01.143) 0:00:26.472 ********** 2026-04-05 02:59:28.648782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648814 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.648857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739547 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 02:59:28.739606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739627 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:28.739639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.010972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011074 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011089 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.011168 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301623 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:29.301646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301816 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:29.301828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.301850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.396849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.396944 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.396972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.396992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.396999 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.397006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.397016 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.397023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.397030 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.397065 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568679 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568715 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568727 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568765 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568803 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:29.568815 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568826 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568836 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568846 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.568864 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815217 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815369 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815395 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815462 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-05 02:59:29.815505 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815519 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:29.815532 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:29.815544 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815556 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815567 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815579 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:29.815622 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:37.033533 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:37.033662 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 02:59:37.033695 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 02:59:37.033769 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-05 02:59:37.033786 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:59:37.033799 | orchestrator |
2026-04-05 02:59:37.033811 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 02:59:37.033824 | orchestrator | Sunday 05 April 2026 02:59:29 +0000 (0:00:01.389) 0:00:27.861 **********
2026-04-05 02:59:37.033835 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:37.033846 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:37.033857 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:37.033868 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:37.033878 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:37.033889 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:37.033899 | orchestrator |
2026-04-05 02:59:37.033919 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 02:59:37.033938 | orchestrator | Sunday 05 April 2026 02:59:30 +0000 (0:00:00.987) 0:00:28.849 **********
2026-04-05 02:59:37.033953 | orchestrator | ok: [testbed-node-3]
2026-04-05 02:59:37.033964 | orchestrator | ok: [testbed-node-4]
2026-04-05 02:59:37.033975 | orchestrator | ok: [testbed-node-5]
2026-04-05 02:59:37.033985 | orchestrator | ok: [testbed-node-0]
2026-04-05 02:59:37.033996 | orchestrator | ok: [testbed-node-1]
2026-04-05 02:59:37.034006 | orchestrator | ok: [testbed-node-2]
2026-04-05 02:59:37.034072 | orchestrator |
2026-04-05 02:59:37.034085 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 02:59:37.034095 | orchestrator | Sunday 05 April 2026 02:59:31 +0000 (0:00:00.876) 0:00:29.726 **********
2026-04-05 02:59:37.034106 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:59:37.034117 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:59:37.034128 | orchestrator | skipping: [testbed-node-5]
2026-04-05 02:59:37.034139 | orchestrator | skipping: [testbed-node-0]
2026-04-05 02:59:37.034149 | orchestrator | skipping: [testbed-node-1]
2026-04-05 02:59:37.034160 | orchestrator | skipping: [testbed-node-2]
2026-04-05 02:59:37.034170 | orchestrator |
2026-04-05 02:59:37.034181 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 02:59:37.034193 | orchestrator | Sunday 05 April 2026 02:59:32 +0000 (0:00:00.654) 0:00:30.380 **********
2026-04-05 02:59:37.034203 | orchestrator | skipping: [testbed-node-3]
2026-04-05 02:59:37.034214 | orchestrator | skipping: [testbed-node-4]
2026-04-05 02:59:37.034225 | orchestrator | skipping: [testbed-node-5]
2026-04-05
02:59:37.034235 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:37.034246 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:37.034257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:37.034267 | orchestrator | 2026-04-05 02:59:37.034278 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 02:59:37.034289 | orchestrator | Sunday 05 April 2026 02:59:33 +0000 (0:00:00.877) 0:00:31.258 ********** 2026-04-05 02:59:37.034308 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:37.034327 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:37.034347 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:37.034377 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:37.034393 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:37.034409 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:37.034454 | orchestrator | 2026-04-05 02:59:37.034473 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 02:59:37.034492 | orchestrator | Sunday 05 April 2026 02:59:33 +0000 (0:00:00.665) 0:00:31.924 ********** 2026-04-05 02:59:37.034507 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:37.034518 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:37.034528 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:37.034547 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:37.034564 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:37.034580 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:37.034598 | orchestrator | 2026-04-05 02:59:37.034615 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 02:59:37.034633 | orchestrator | Sunday 05 April 2026 02:59:34 +0000 (0:00:01.014) 0:00:32.939 ********** 2026-04-05 02:59:37.034649 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-0) 2026-04-05 02:59:37.034667 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 02:59:37.034686 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 02:59:37.034705 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 02:59:37.034723 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 02:59:37.034740 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 02:59:37.034759 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 02:59:37.034777 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 02:59:37.034796 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-05 02:59:37.034815 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 02:59:37.034834 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 02:59:37.034846 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 02:59:37.034856 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-05 02:59:37.034867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 02:59:37.034878 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 02:59:37.034888 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-05 02:59:37.034899 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-05 02:59:37.034919 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 02:59:37.034930 | orchestrator | 2026-04-05 02:59:37.034941 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 02:59:37.034952 | orchestrator | Sunday 05 April 2026 02:59:36 +0000 (0:00:01.691) 0:00:34.631 ********** 2026-04-05 02:59:37.034963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 02:59:37.034974 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-04-05 02:59:37.034985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 02:59:37.034996 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:37.035020 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 02:59:52.501867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 02:59:52.502862 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 02:59:52.502897 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:52.502910 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 02:59:52.502921 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 02:59:52.502932 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 02:59:52.502943 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:52.502954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 02:59:52.502964 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 02:59:52.503000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 02:59:52.503063 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:52.503075 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 02:59:52.503086 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 02:59:52.503097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 02:59:52.503108 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:52.503119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 02:59:52.503129 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 02:59:52.503140 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 02:59:52.503150 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 02:59:52.503162 | orchestrator | 2026-04-05 02:59:52.503174 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 02:59:52.503187 | orchestrator | Sunday 05 April 2026 02:59:37 +0000 (0:00:00.965) 0:00:35.596 ********** 2026-04-05 02:59:52.503198 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:52.503222 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:52.503234 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:52.503255 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 02:59:52.503267 | orchestrator | 2026-04-05 02:59:52.503278 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 02:59:52.503290 | orchestrator | Sunday 05 April 2026 02:59:38 +0000 (0:00:01.172) 0:00:36.769 ********** 2026-04-05 02:59:52.503301 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.503313 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:52.503324 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:52.503335 | orchestrator | 2026-04-05 02:59:52.503346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 02:59:52.503357 | orchestrator | Sunday 05 April 2026 02:59:39 +0000 (0:00:00.355) 0:00:37.125 ********** 2026-04-05 02:59:52.503367 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.503378 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:52.503389 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:52.503400 | orchestrator | 2026-04-05 02:59:52.503411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 02:59:52.503446 | orchestrator | Sunday 05 April 2026 02:59:39 +0000 
(0:00:00.367) 0:00:37.492 ********** 2026-04-05 02:59:52.503459 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.503470 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:52.503481 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:52.503491 | orchestrator | 2026-04-05 02:59:52.503502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 02:59:52.503513 | orchestrator | Sunday 05 April 2026 02:59:40 +0000 (0:00:00.618) 0:00:38.110 ********** 2026-04-05 02:59:52.503523 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:52.503536 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:52.503546 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:52.503557 | orchestrator | 2026-04-05 02:59:52.503568 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 02:59:52.503579 | orchestrator | Sunday 05 April 2026 02:59:40 +0000 (0:00:00.502) 0:00:38.613 ********** 2026-04-05 02:59:52.503590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 02:59:52.503601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 02:59:52.503612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 02:59:52.503623 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.503634 | orchestrator | 2026-04-05 02:59:52.503645 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 02:59:52.503665 | orchestrator | Sunday 05 April 2026 02:59:41 +0000 (0:00:00.453) 0:00:39.066 ********** 2026-04-05 02:59:52.503675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 02:59:52.503686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 02:59:52.503697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 02:59:52.503708 | orchestrator | 
skipping: [testbed-node-3] 2026-04-05 02:59:52.503718 | orchestrator | 2026-04-05 02:59:52.503729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 02:59:52.503740 | orchestrator | Sunday 05 April 2026 02:59:41 +0000 (0:00:00.445) 0:00:39.512 ********** 2026-04-05 02:59:52.503765 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 02:59:52.503776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 02:59:52.503787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 02:59:52.503798 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.503808 | orchestrator | 2026-04-05 02:59:52.503819 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 02:59:52.503830 | orchestrator | Sunday 05 April 2026 02:59:41 +0000 (0:00:00.429) 0:00:39.941 ********** 2026-04-05 02:59:52.503841 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:52.503852 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:52.503863 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:52.503873 | orchestrator | 2026-04-05 02:59:52.503905 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 02:59:52.503916 | orchestrator | Sunday 05 April 2026 02:59:42 +0000 (0:00:00.354) 0:00:40.295 ********** 2026-04-05 02:59:52.503927 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 02:59:52.503938 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 02:59:52.503949 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 02:59:52.503960 | orchestrator | 2026-04-05 02:59:52.503970 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 02:59:52.503981 | orchestrator | Sunday 05 April 2026 02:59:43 +0000 (0:00:01.174) 0:00:41.470 ********** 2026-04-05 02:59:52.503992 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 02:59:52.504003 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 02:59:52.504014 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 02:59:52.504025 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 02:59:52.504036 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 02:59:52.504046 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 02:59:52.504057 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 02:59:52.504067 | orchestrator | 2026-04-05 02:59:52.504078 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 02:59:52.504089 | orchestrator | Sunday 05 April 2026 02:59:44 +0000 (0:00:00.816) 0:00:42.287 ********** 2026-04-05 02:59:52.504099 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 02:59:52.504110 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 02:59:52.504225 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 02:59:52.504237 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 02:59:52.504248 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 02:59:52.504258 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 02:59:52.504269 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 02:59:52.504279 | orchestrator | 2026-04-05 02:59:52.504290 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 02:59:52.504308 | orchestrator | Sunday 05 April 2026 02:59:46 +0000 (0:00:01.991) 0:00:44.278 ********** 2026-04-05 02:59:52.504320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:59:52.504332 | orchestrator | 2026-04-05 02:59:52.504342 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 02:59:52.504353 | orchestrator | Sunday 05 April 2026 02:59:47 +0000 (0:00:01.299) 0:00:45.577 ********** 2026-04-05 02:59:52.504364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 02:59:52.504374 | orchestrator | 2026-04-05 02:59:52.504385 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 02:59:52.504395 | orchestrator | Sunday 05 April 2026 02:59:48 +0000 (0:00:01.303) 0:00:46.881 ********** 2026-04-05 02:59:52.504406 | orchestrator | skipping: [testbed-node-3] 2026-04-05 02:59:52.504417 | orchestrator | skipping: [testbed-node-4] 2026-04-05 02:59:52.504569 | orchestrator | skipping: [testbed-node-5] 2026-04-05 02:59:52.504582 | orchestrator | ok: [testbed-node-0] 2026-04-05 02:59:52.504593 | orchestrator | ok: [testbed-node-1] 2026-04-05 02:59:52.504604 | orchestrator | ok: [testbed-node-2] 2026-04-05 02:59:52.504614 | orchestrator | 2026-04-05 02:59:52.504625 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 02:59:52.504636 | orchestrator | Sunday 05 April 2026 02:59:50 +0000 (0:00:01.253) 0:00:48.134 ********** 2026-04-05 02:59:52.504647 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
02:59:52.504658 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:52.504668 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:52.504679 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:52.504690 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:52.504700 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:52.504711 | orchestrator | 2026-04-05 02:59:52.504721 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 02:59:52.504732 | orchestrator | Sunday 05 April 2026 02:59:50 +0000 (0:00:00.732) 0:00:48.866 ********** 2026-04-05 02:59:52.504743 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:52.504753 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:52.504764 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:52.504775 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:52.504785 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:52.504796 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:52.504807 | orchestrator | 2026-04-05 02:59:52.504825 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 02:59:52.504836 | orchestrator | Sunday 05 April 2026 02:59:51 +0000 (0:00:00.912) 0:00:49.779 ********** 2026-04-05 02:59:52.504847 | orchestrator | skipping: [testbed-node-0] 2026-04-05 02:59:52.504857 | orchestrator | skipping: [testbed-node-1] 2026-04-05 02:59:52.504868 | orchestrator | ok: [testbed-node-3] 2026-04-05 02:59:52.504879 | orchestrator | skipping: [testbed-node-2] 2026-04-05 02:59:52.504889 | orchestrator | ok: [testbed-node-4] 2026-04-05 02:59:52.504900 | orchestrator | ok: [testbed-node-5] 2026-04-05 02:59:52.504910 | orchestrator | 2026-04-05 02:59:52.504921 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 02:59:52.504942 | orchestrator | Sunday 05 April 2026 02:59:52 +0000 (0:00:00.762) 0:00:50.542 ********** 
2026-04-05 03:00:14.979798 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.979913 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.979930 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.979943 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.979957 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.979968 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.979980 | orchestrator | 2026-04-05 03:00:14.979992 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 03:00:14.980029 | orchestrator | Sunday 05 April 2026 02:59:53 +0000 (0:00:01.277) 0:00:51.820 ********** 2026-04-05 03:00:14.980040 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.980052 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.980062 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.980073 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.980084 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.980095 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.980105 | orchestrator | 2026-04-05 03:00:14.980117 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 03:00:14.980128 | orchestrator | Sunday 05 April 2026 02:59:54 +0000 (0:00:00.662) 0:00:52.482 ********** 2026-04-05 03:00:14.980139 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.980150 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.980161 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.980172 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.980182 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.980193 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.980204 | orchestrator | 2026-04-05 03:00:14.980231 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2026-04-05 03:00:14.980253 | orchestrator | Sunday 05 April 2026 02:59:55 +0000 (0:00:00.900) 0:00:53.383 ********** 2026-04-05 03:00:14.980264 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.980275 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.980285 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.980296 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.980306 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.980317 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.980328 | orchestrator | 2026-04-05 03:00:14.980339 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 03:00:14.980349 | orchestrator | Sunday 05 April 2026 02:59:56 +0000 (0:00:01.131) 0:00:54.515 ********** 2026-04-05 03:00:14.980360 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.980371 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.980381 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.980393 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.980410 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.980510 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.980522 | orchestrator | 2026-04-05 03:00:14.980533 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 03:00:14.980544 | orchestrator | Sunday 05 April 2026 02:59:57 +0000 (0:00:01.397) 0:00:55.912 ********** 2026-04-05 03:00:14.980555 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.980566 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.980577 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.980589 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.980599 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.980610 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.980621 | orchestrator | 2026-04-05 03:00:14.980632 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 03:00:14.980643 | orchestrator | Sunday 05 April 2026 02:59:58 +0000 (0:00:00.641) 0:00:56.554 ********** 2026-04-05 03:00:14.980653 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.980664 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.980675 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.980686 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.980697 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.980707 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.980718 | orchestrator | 2026-04-05 03:00:14.980729 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 03:00:14.980740 | orchestrator | Sunday 05 April 2026 02:59:59 +0000 (0:00:00.874) 0:00:57.428 ********** 2026-04-05 03:00:14.980751 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.980761 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.980781 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.980792 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.980803 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.980814 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.980849 | orchestrator | 2026-04-05 03:00:14.980872 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 03:00:14.980883 | orchestrator | Sunday 05 April 2026 02:59:59 +0000 (0:00:00.623) 0:00:58.052 ********** 2026-04-05 03:00:14.980914 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.980926 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.980936 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.980947 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.980958 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.980969 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 03:00:14.980980 | orchestrator | 2026-04-05 03:00:14.980990 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 03:00:14.981001 | orchestrator | Sunday 05 April 2026 03:00:00 +0000 (0:00:00.903) 0:00:58.955 ********** 2026-04-05 03:00:14.981012 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.981022 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.981033 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.981044 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.981055 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.981081 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.981092 | orchestrator | 2026-04-05 03:00:14.981103 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 03:00:14.981114 | orchestrator | Sunday 05 April 2026 03:00:01 +0000 (0:00:00.648) 0:00:59.604 ********** 2026-04-05 03:00:14.981125 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.981135 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.981146 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.981157 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:00:14.981167 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.981178 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.981189 | orchestrator | 2026-04-05 03:00:14.981200 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 03:00:14.981230 | orchestrator | Sunday 05 April 2026 03:00:02 +0000 (0:00:00.933) 0:01:00.537 ********** 2026-04-05 03:00:14.981242 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.981253 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.981264 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.981274 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 03:00:14.981285 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:00:14.981295 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:00:14.981306 | orchestrator | 2026-04-05 03:00:14.981317 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 03:00:14.981328 | orchestrator | Sunday 05 April 2026 03:00:03 +0000 (0:00:00.721) 0:01:01.259 ********** 2026-04-05 03:00:14.981339 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:00:14.981349 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:00:14.981360 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:00:14.981371 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.981382 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.981392 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.981403 | orchestrator | 2026-04-05 03:00:14.981438 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 03:00:14.981450 | orchestrator | Sunday 05 April 2026 03:00:04 +0000 (0:00:00.939) 0:01:02.199 ********** 2026-04-05 03:00:14.981461 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.981472 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.981482 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:00:14.981493 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:00:14.981503 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:00:14.981514 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:00:14.981533 | orchestrator | 2026-04-05 03:00:14.981544 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 03:00:14.981555 | orchestrator | Sunday 05 April 2026 03:00:04 +0000 (0:00:00.749) 0:01:02.949 ********** 2026-04-05 03:00:14.981566 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:00:14.981577 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:00:14.981587 | orchestrator | ok: [testbed-node-5] 
2026-04-05 03:00:14.981598 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:00:14.981608 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:00:14.981619 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:00:14.981629 | orchestrator |
2026-04-05 03:00:14.981640 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 03:00:14.981651 | orchestrator | Sunday 05 April 2026  03:00:06 +0000 (0:00:01.388)       0:01:04.337 **********
2026-04-05 03:00:14.981662 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:00:14.981673 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:00:14.981683 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:00:14.981694 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:00:14.981705 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:00:14.981715 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:00:14.981726 | orchestrator |
2026-04-05 03:00:14.981737 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 03:00:14.981747 | orchestrator | Sunday 05 April 2026  03:00:08 +0000 (0:00:01.897)       0:01:06.236 **********
2026-04-05 03:00:14.981758 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:00:14.981769 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:00:14.981780 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:00:14.981791 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:00:14.981801 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:00:14.981812 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:00:14.981823 | orchestrator |
2026-04-05 03:00:14.981833 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 03:00:14.981844 | orchestrator | Sunday 05 April 2026  03:00:10 +0000 (0:00:02.439)       0:01:08.675 **********
2026-04-05 03:00:14.981856 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:00:14.981869 | orchestrator |
2026-04-05 03:00:14.981880 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 03:00:14.981891 | orchestrator | Sunday 05 April 2026  03:00:11 +0000 (0:00:01.331)       0:01:10.007 **********
2026-04-05 03:00:14.981902 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:00:14.981912 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:00:14.981923 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:00:14.981934 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:00:14.981944 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:00:14.981955 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:00:14.981965 | orchestrator |
2026-04-05 03:00:14.981976 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 03:00:14.981987 | orchestrator | Sunday 05 April 2026  03:00:12 +0000 (0:00:00.654)       0:01:10.662 **********
2026-04-05 03:00:14.981998 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:00:14.982009 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:00:14.982085 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:00:14.982105 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:00:14.982124 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:00:14.982141 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:00:14.982159 | orchestrator |
2026-04-05 03:00:14.982176 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 03:00:14.982193 | orchestrator | Sunday 05 April 2026  03:00:13 +0000 (0:00:00.920)       0:01:11.582 **********
2026-04-05 03:00:14.982211 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982239 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982269 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982287 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982306 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982326 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:00:14.982348 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 03:00:14.982368 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:00:14.982403 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:01:30.787681 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:01:30.787781 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:01:30.787792 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 03:01:30.787800 | orchestrator |
2026-04-05 03:01:30.787808 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 03:01:30.787816 | orchestrator | Sunday 05 April 2026  03:00:14 +0000 (0:00:01.438)       0:01:13.020 **********
2026-04-05 03:01:30.787823 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:01:30.787831 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:01:30.787838 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:01:30.787845 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:01:30.787851 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:01:30.787858 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:01:30.787865 | orchestrator |
2026-04-05 03:01:30.787872 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 03:01:30.787878 | orchestrator | Sunday 05 April 2026  03:00:16 +0000 (0:00:01.265)       0:01:14.286 **********
2026-04-05 03:01:30.787885 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.787892 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.787898 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.787906 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.787913 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.787919 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.787925 | orchestrator |
2026-04-05 03:01:30.787931 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 03:01:30.787936 | orchestrator | Sunday 05 April 2026  03:00:16 +0000 (0:00:00.672)       0:01:14.959 **********
2026-04-05 03:01:30.787942 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.787948 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.787955 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.787961 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.787967 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.787973 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.787980 | orchestrator |
2026-04-05 03:01:30.787987 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 03:01:30.787993 | orchestrator | Sunday 05 April 2026  03:00:17 +0000 (0:00:00.894)       0:01:15.853 **********
2026-04-05 03:01:30.788000 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788007 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788013 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788019 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788025 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788031 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788036 | orchestrator |
2026-04-05 03:01:30.788042 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 03:01:30.788048 | orchestrator | Sunday 05 April 2026  03:00:18 +0000 (0:00:00.629)       0:01:16.482 **********
2026-04-05 03:01:30.788077 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:01:30.788085 | orchestrator |
2026-04-05 03:01:30.788091 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 03:01:30.788097 | orchestrator | Sunday 05 April 2026  03:00:19 +0000 (0:00:01.312)       0:01:17.795 **********
2026-04-05 03:01:30.788103 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:30.788109 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:30.788115 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:30.788121 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:01:30.788127 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:01:30.788133 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:01:30.788139 | orchestrator |
2026-04-05 03:01:30.788146 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 03:01:30.788152 | orchestrator | Sunday 05 April 2026  03:01:19 +0000 (0:00:59.376)       0:02:17.171 **********
2026-04-05 03:01:30.788158 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788165 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788171 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788177 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788183 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788190 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788196 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788202 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788209 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788214 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788235 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788241 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788247 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788253 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788258 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788265 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788271 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788277 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788300 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788307 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788313 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 03:01:30.788319 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 03:01:30.788326 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 03:01:30.788332 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788339 | orchestrator |
2026-04-05 03:01:30.788345 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 03:01:30.788351 | orchestrator | Sunday 05 April 2026  03:01:19 +0000 (0:00:00.717)       0:02:17.888 **********
2026-04-05 03:01:30.788357 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788362 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788369 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788376 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788383 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788398 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788449 | orchestrator |
2026-04-05 03:01:30.788455 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 03:01:30.788461 | orchestrator | Sunday 05 April 2026  03:01:20 +0000 (0:00:01.031)       0:02:18.920 **********
2026-04-05 03:01:30.788468 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788474 | orchestrator |
2026-04-05 03:01:30.788480 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 03:01:30.788486 | orchestrator | Sunday 05 April 2026  03:01:21 +0000 (0:00:00.175)       0:02:19.095 **********
2026-04-05 03:01:30.788491 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788497 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788504 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788510 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788516 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788522 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788529 | orchestrator |
2026-04-05 03:01:30.788536 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 03:01:30.788543 | orchestrator | Sunday 05 April 2026  03:01:21 +0000 (0:00:00.880)       0:02:19.765 **********
2026-04-05 03:01:30.788549 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788555 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788561 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788567 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788574 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788581 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788587 | orchestrator |
2026-04-05 03:01:30.788594 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 03:01:30.788600 | orchestrator | Sunday 05 April 2026  03:01:22 +0000 (0:00:00.669)       0:02:20.645 **********
2026-04-05 03:01:30.788606 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788613 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788620 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788627 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788633 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788640 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788647 | orchestrator |
2026-04-05 03:01:30.788653 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 03:01:30.788660 | orchestrator | Sunday 05 April 2026  03:01:23 +0000 (0:00:00.669)       0:02:21.315 **********
2026-04-05 03:01:30.788666 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:30.788672 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:30.788679 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:30.788686 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:01:30.788692 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:01:30.788698 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:01:30.788704 | orchestrator |
2026-04-05 03:01:30.788710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 03:01:30.788716 | orchestrator | Sunday 05 April 2026  03:01:26 +0000 (0:00:03.484)       0:02:24.799 **********
2026-04-05 03:01:30.788723 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:30.788729 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:30.788735 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:30.788741 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:01:30.788747 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:01:30.788754 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:01:30.788761 | orchestrator |
2026-04-05 03:01:30.788769 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 03:01:30.788776 | orchestrator | Sunday 05 April 2026  03:01:27 +0000 (0:00:00.735)       0:02:25.535 **********
2026-04-05 03:01:30.788784 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:01:30.788793 | orchestrator |
2026-04-05 03:01:30.788800 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 03:01:30.788814 | orchestrator | Sunday 05 April 2026  03:01:28 +0000 (0:00:01.310)       0:02:26.845 **********
2026-04-05 03:01:30.788820 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788826 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788832 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788839 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788852 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788859 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788865 | orchestrator |
2026-04-05 03:01:30.788872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 03:01:30.788878 | orchestrator | Sunday 05 April 2026  03:01:29 +0000 (0:00:00.931)       0:02:27.777 **********
2026-04-05 03:01:30.788884 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:30.788891 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:30.788898 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:30.788904 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:30.788911 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:30.788917 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:30.788924 | orchestrator |
2026-04-05 03:01:30.788931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 03:01:30.788937 | orchestrator | Sunday 05 April 2026  03:01:30 +0000 (0:00:00.607)       0:02:28.384 **********
2026-04-05 03:01:30.788952 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.967945 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968054 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968069 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968080 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968091 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968103 | orchestrator |
2026-04-05 03:01:44.968115 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 03:01:44.968128 | orchestrator | Sunday 05 April 2026  03:01:31 +0000 (0:00:00.944)       0:02:29.328 **********
2026-04-05 03:01:44.968139 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.968150 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968161 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968173 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968184 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968195 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968206 | orchestrator |
2026-04-05 03:01:44.968218 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 03:01:44.968229 | orchestrator | Sunday 05 April 2026  03:01:31 +0000 (0:00:00.671)       0:02:30.000 **********
2026-04-05 03:01:44.968240 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.968252 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968263 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968274 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968285 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968296 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968306 | orchestrator |
2026-04-05 03:01:44.968317 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 03:01:44.968329 | orchestrator | Sunday 05 April 2026  03:01:32 +0000 (0:00:00.891)       0:02:30.891 **********
2026-04-05 03:01:44.968339 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.968350 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968361 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968373 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968384 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968394 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968432 | orchestrator |
2026-04-05 03:01:44.968443 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 03:01:44.968454 | orchestrator | Sunday 05 April 2026  03:01:33 +0000 (0:00:00.691)       0:02:31.583 **********
2026-04-05 03:01:44.968490 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.968501 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968512 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968522 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968532 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968543 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968553 | orchestrator |
2026-04-05 03:01:44.968564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 03:01:44.968574 | orchestrator | Sunday 05 April 2026  03:01:34 +0000 (0:00:01.013)       0:02:32.597 **********
2026-04-05 03:01:44.968585 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:44.968595 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:44.968605 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:44.968615 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:44.968625 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:44.968636 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:44.968646 | orchestrator |
2026-04-05 03:01:44.968656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 03:01:44.968667 | orchestrator | Sunday 05 April 2026  03:01:35 +0000 (0:00:00.696)       0:02:33.293 **********
2026-04-05 03:01:44.968677 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:44.968689 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:44.968699 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:44.968709 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:01:44.968719 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:01:44.968730 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:01:44.968740 | orchestrator |
2026-04-05 03:01:44.968750 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 03:01:44.968761 | orchestrator | Sunday 05 April 2026  03:01:36 +0000 (0:00:01.413)       0:02:34.707 **********
2026-04-05 03:01:44.968772 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:01:44.968784 | orchestrator |
2026-04-05 03:01:44.968795 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 03:01:44.968806 | orchestrator | Sunday 05 April 2026  03:01:38 +0000 (0:00:01.501)       0:02:36.208 **********
2026-04-05 03:01:44.968816 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 03:01:44.968827 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-05 03:01:44.968837 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-05 03:01:44.968847 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-05 03:01:44.968858 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968868 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-05 03:01:44.968879 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968902 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-05 03:01:44.968913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968923 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.968934 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968944 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968954 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.968965 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.968976 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-05 03:01:44.968987 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.968999 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.969027 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.969038 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.969056 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.969068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-05 03:01:44.969078 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.969089 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969100 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.969111 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-05 03:01:44.969122 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969133 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969155 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969166 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969177 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969187 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969210 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-05 03:01:44.969232 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969242 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969253 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969264 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-05 03:01:44.969286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969297 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969308 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969329 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969340 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-05 03:01:44.969352 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969363 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969385 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969395 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969479 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969490 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-05 03:01:44.969499 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969509 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969519 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 03:01:44.969550 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969560 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969570 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969590 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969601 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 03:01:44.969623 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:44.969634 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:44.969644 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:44.969663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969674 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:44.969685 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:44.969696 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 03:01:44.969706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:44.969717 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:44.969728 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:44.969739 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:44.969760 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.982809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 03:01:59.982933 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.982948 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.982962 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:59.982977 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.982993 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 03:01:59.983010 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-05 03:01:59.983025 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-05 03:01:59.983041 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 03:01:59.983057 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.983072 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-05 03:01:59.983088 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 03:01:59.983103 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-05 03:01:59.983113 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-05 03:01:59.983122 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-05 03:01:59.983131 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 03:01:59.983146 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-05 03:01:59.983161 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-05 03:01:59.983176 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-05 03:01:59.983191 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-05 03:01:59.983206 | orchestrator |
2026-04-05 03:01:59.983222 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 03:01:59.983232 | orchestrator | Sunday 05 April 2026  03:01:44 +0000 (0:00:06.762)       0:02:42.970 **********
2026-04-05 03:01:59.983241 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.983250 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.983259 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.983273 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:01:59.983320 | orchestrator |
2026-04-05 03:01:59.983337 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 03:01:59.983354 | orchestrator | Sunday 05 April 2026  03:01:45 +0000 (0:00:01.085)       0:02:44.055 **********
2026-04-05 03:01:59.983368 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983380 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983391 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983449 | orchestrator |
2026-04-05 03:01:59.983460 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 03:01:59.983471 | orchestrator | Sunday 05 April 2026  03:01:46 +0000 (0:00:00.719)       0:02:44.775 **********
2026-04-05 03:01:59.983482 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983492 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983503 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 03:01:59.983514 | orchestrator |
2026-04-05 03:01:59.983525 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 03:01:59.983535 | orchestrator | Sunday 05 April 2026  03:01:47 +0000 (0:00:01.220)       0:02:45.995 **********
2026-04-05 03:01:59.983546 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:59.983556 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:59.983567 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:59.983577 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.983588 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.983598 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.983609 | orchestrator |
2026-04-05 03:01:59.983619 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 03:01:59.983717 | orchestrator | Sunday 05 April 2026  03:01:48 +0000 (0:00:00.903)       0:02:46.899 **********
2026-04-05 03:01:59.983762 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:01:59.983771 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:01:59.983780 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:01:59.983789 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.983797 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.983806 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.983815 | orchestrator |
2026-04-05 03:01:59.983823 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 03:01:59.983832 | orchestrator | Sunday 05 April 2026  03:01:49 +0000 (0:00:00.668)       0:02:47.568 **********
2026-04-05 03:01:59.983841 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.983850 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:59.983858 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:59.983870 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.983885 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.983899 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.983914 | orchestrator |
2026-04-05 03:01:59.983951 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 03:01:59.983961 | orchestrator | Sunday 05 April 2026  03:01:50 +0000 (0:00:00.886)       0:02:48.455 **********
2026-04-05 03:01:59.983971 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.983979 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:59.983989 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:59.983997 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.984006 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.984015 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.984034 | orchestrator |
2026-04-05 03:01:59.984043 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 03:01:59.984052 | orchestrator | Sunday 05 April 2026  03:01:51 +0000 (0:00:00.627)       0:02:49.082 **********
2026-04-05 03:01:59.984061 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.984070 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:59.984079 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:59.984088 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.984096 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.984105 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.984114 | orchestrator |
2026-04-05 03:01:59.984123 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 03:01:59.984132 | orchestrator | Sunday 05 April 2026  03:01:51 +0000 (0:00:00.905)       0:02:49.988 **********
2026-04-05 03:01:59.984141 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.984150 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:59.984159 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:59.984167 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.984176 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.984185 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.984194 | orchestrator |
2026-04-05 03:01:59.984202 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 03:01:59.984211 | orchestrator | Sunday 05 April 2026  03:01:52 +0000 (0:00:00.642)       0:02:50.630 **********
2026-04-05 03:01:59.984220 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.984229 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:01:59.984238 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:01:59.984247 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:01:59.984255 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:01:59.984264 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:01:59.984273 | orchestrator |
2026-04-05 03:01:59.984282 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 03:01:59.984291 | orchestrator | Sunday 05 April 2026  03:01:53 +0000 (0:00:00.908)       0:02:51.539 **********
2026-04-05 03:01:59.984300 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:01:59.984308 | orchestrator | skipping:
[testbed-node-4] 2026-04-05 03:01:59.984317 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:01:59.984325 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:01:59.984334 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:01:59.984343 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:01:59.984352 | orchestrator | 2026-04-05 03:01:59.984361 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 03:01:59.984369 | orchestrator | Sunday 05 April 2026 03:01:54 +0000 (0:00:00.632) 0:02:52.172 ********** 2026-04-05 03:01:59.984378 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:01:59.984387 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:01:59.984423 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:01:59.984436 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:01:59.984445 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:01:59.984454 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:01:59.984463 | orchestrator | 2026-04-05 03:01:59.984472 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 03:01:59.984480 | orchestrator | Sunday 05 April 2026 03:01:57 +0000 (0:00:03.019) 0:02:55.191 ********** 2026-04-05 03:01:59.984489 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:01:59.984498 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:01:59.984507 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:01:59.984516 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:01:59.984525 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:01:59.984533 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:01:59.984542 | orchestrator | 2026-04-05 03:01:59.984551 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 03:01:59.984574 | orchestrator | Sunday 05 April 2026 03:01:57 +0000 (0:00:00.661) 0:02:55.852 ********** 
2026-04-05 03:01:59.984589 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:01:59.984601 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:01:59.984615 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:01:59.984629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:01:59.984643 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:01:59.984656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:01:59.984670 | orchestrator | 2026-04-05 03:01:59.984684 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 03:01:59.984699 | orchestrator | Sunday 05 April 2026 03:01:58 +0000 (0:00:00.943) 0:02:56.796 ********** 2026-04-05 03:01:59.984713 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:01:59.984728 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:01:59.984751 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:01:59.984767 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:01:59.984778 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:01:59.984787 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:01:59.984796 | orchestrator | 2026-04-05 03:01:59.984805 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 03:01:59.984813 | orchestrator | Sunday 05 April 2026 03:01:59 +0000 (0:00:00.627) 0:02:57.423 ********** 2026-04-05 03:01:59.984822 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 03:01:59.984831 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 03:01:59.984850 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 03:02:14.922223 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
03:02:14.922329 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922343 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922353 | orchestrator | 2026-04-05 03:02:14.922364 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 03:02:14.922374 | orchestrator | Sunday 05 April 2026 03:02:00 +0000 (0:00:00.947) 0:02:58.371 ********** 2026-04-05 03:02:14.922386 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-05 03:02:14.922428 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-05 03:02:14.922440 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.922450 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-05 03:02:14.922460 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-05 03:02:14.922469 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.922478 | orchestrator | 
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-05 03:02:14.922510 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-05 03:02:14.922519 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.922528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.922537 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922546 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922554 | orchestrator | 2026-04-05 03:02:14.922563 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 03:02:14.922572 | orchestrator | Sunday 05 April 2026 03:02:00 +0000 (0:00:00.685) 0:02:59.057 ********** 2026-04-05 03:02:14.922581 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.922590 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.922598 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.922607 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.922615 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922624 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922633 | orchestrator | 2026-04-05 03:02:14.922641 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 03:02:14.922650 | orchestrator | Sunday 05 April 2026 03:02:01 +0000 (0:00:00.907) 0:02:59.965 ********** 2026-04-05 03:02:14.922659 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 03:02:14.922667 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.922676 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.922684 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.922693 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922701 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922710 | orchestrator | 2026-04-05 03:02:14.922719 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 03:02:14.922729 | orchestrator | Sunday 05 April 2026 03:02:02 +0000 (0:00:00.865) 0:03:00.830 ********** 2026-04-05 03:02:14.922751 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.922760 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.922769 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.922780 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.922790 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922801 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922815 | orchestrator | 2026-04-05 03:02:14.922831 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 03:02:14.922847 | orchestrator | Sunday 05 April 2026 03:02:03 +0000 (0:00:00.691) 0:03:01.522 ********** 2026-04-05 03:02:14.922861 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.922877 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.922893 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.922908 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.922923 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.922937 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.922950 | orchestrator | 2026-04-05 03:02:14.922975 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2026-04-05 03:02:14.922985 | orchestrator | Sunday 05 April 2026 03:02:04 +0000 (0:00:00.938) 0:03:02.460 ********** 2026-04-05 03:02:14.922994 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.923002 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.923011 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.923019 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.923028 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.923036 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.923054 | orchestrator | 2026-04-05 03:02:14.923063 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 03:02:14.923072 | orchestrator | Sunday 05 April 2026 03:02:05 +0000 (0:00:00.750) 0:03:03.211 ********** 2026-04-05 03:02:14.923081 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:02:14.923090 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:02:14.923098 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:02:14.923107 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.923116 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.923124 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.923133 | orchestrator | 2026-04-05 03:02:14.923142 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 03:02:14.923150 | orchestrator | Sunday 05 April 2026 03:02:06 +0000 (0:00:00.882) 0:03:04.094 ********** 2026-04-05 03:02:14.923159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:02:14.923168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:02:14.923177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:02:14.923186 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.923194 | orchestrator | 2026-04-05 03:02:14.923203 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 03:02:14.923212 | orchestrator | Sunday 05 April 2026 03:02:06 +0000 (0:00:00.514) 0:03:04.608 ********** 2026-04-05 03:02:14.923220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:02:14.923229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:02:14.923237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:02:14.923246 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.923254 | orchestrator | 2026-04-05 03:02:14.923263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 03:02:14.923272 | orchestrator | Sunday 05 April 2026 03:02:07 +0000 (0:00:00.459) 0:03:05.068 ********** 2026-04-05 03:02:14.923280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:02:14.923302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:02:14.923310 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:02:14.923319 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.923328 | orchestrator | 2026-04-05 03:02:14.923336 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 03:02:14.923345 | orchestrator | Sunday 05 April 2026 03:02:07 +0000 (0:00:00.446) 0:03:05.515 ********** 2026-04-05 03:02:14.923354 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:02:14.923375 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:02:14.923384 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:02:14.923439 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.923449 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.923458 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.923467 | orchestrator | 2026-04-05 03:02:14.923475 | 
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 03:02:14.923484 | orchestrator | Sunday 05 April 2026 03:02:08 +0000 (0:00:00.658) 0:03:06.174 ********** 2026-04-05 03:02:14.923493 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 03:02:14.923502 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 03:02:14.923510 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 03:02:14.923519 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-05 03:02:14.923528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:14.923536 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-05 03:02:14.923545 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:14.923553 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-05 03:02:14.923562 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:14.923571 | orchestrator | 2026-04-05 03:02:14.923579 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 03:02:14.923595 | orchestrator | Sunday 05 April 2026 03:02:09 +0000 (0:00:01.814) 0:03:07.988 ********** 2026-04-05 03:02:14.923604 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:02:14.923613 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:02:14.923621 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:02:14.923630 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:02:14.923639 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:02:14.923647 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:02:14.923656 | orchestrator | 2026-04-05 03:02:14.923664 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 03:02:14.923673 | orchestrator | Sunday 05 April 2026 03:02:12 +0000 (0:00:02.774) 0:03:10.762 ********** 2026-04-05 03:02:14.923682 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:02:14.923696 | 
orchestrator | changed: [testbed-node-4] 2026-04-05 03:02:14.923705 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:02:14.923714 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:02:14.923722 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:02:14.923731 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:02:14.923740 | orchestrator | 2026-04-05 03:02:14.923748 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-05 03:02:14.923757 | orchestrator | Sunday 05 April 2026 03:02:13 +0000 (0:00:01.030) 0:03:11.793 ********** 2026-04-05 03:02:14.923766 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:14.923774 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:14.923783 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:14.923792 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:02:14.923801 | orchestrator | 2026-04-05 03:02:14.923810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-05 03:02:14.923826 | orchestrator | Sunday 05 April 2026 03:02:14 +0000 (0:00:01.168) 0:03:12.961 ********** 2026-04-05 03:02:32.475725 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:02:32.475833 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:02:32.475847 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:02:32.475857 | orchestrator | 2026-04-05 03:02:32.475869 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-05 03:02:32.475880 | orchestrator | Sunday 05 April 2026 03:02:15 +0000 (0:00:00.372) 0:03:13.334 ********** 2026-04-05 03:02:32.475890 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:02:32.475900 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:02:32.475910 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:02:32.475920 | orchestrator | 
2026-04-05 03:02:32.475929 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 03:02:32.475939 | orchestrator | Sunday 05 April 2026 03:02:16 +0000 (0:00:01.513) 0:03:14.847 ********** 2026-04-05 03:02:32.475949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 03:02:32.475959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 03:02:32.475969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 03:02:32.475979 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:32.475988 | orchestrator | 2026-04-05 03:02:32.475998 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 03:02:32.476007 | orchestrator | Sunday 05 April 2026 03:02:17 +0000 (0:00:00.691) 0:03:15.539 ********** 2026-04-05 03:02:32.476017 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:02:32.476028 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:02:32.476038 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:02:32.476047 | orchestrator | 2026-04-05 03:02:32.476057 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 03:02:32.476067 | orchestrator | Sunday 05 April 2026 03:02:17 +0000 (0:00:00.349) 0:03:15.889 ********** 2026-04-05 03:02:32.476077 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:32.476086 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:32.476096 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:32.476130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:02:32.476141 | orchestrator | 2026-04-05 03:02:32.476151 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 03:02:32.476160 | orchestrator | Sunday 05 April 2026 03:02:18 +0000 
(0:00:01.120) 0:03:17.010 ********** 2026-04-05 03:02:32.476169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:02:32.476179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:02:32.476189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:02:32.476198 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476208 | orchestrator | 2026-04-05 03:02:32.476218 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-05 03:02:32.476227 | orchestrator | Sunday 05 April 2026 03:02:19 +0000 (0:00:00.417) 0:03:17.428 ********** 2026-04-05 03:02:32.476236 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476249 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:32.476266 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:32.476291 | orchestrator | 2026-04-05 03:02:32.476309 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-05 03:02:32.476325 | orchestrator | Sunday 05 April 2026 03:02:19 +0000 (0:00:00.338) 0:03:17.766 ********** 2026-04-05 03:02:32.476341 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476357 | orchestrator | 2026-04-05 03:02:32.476372 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-05 03:02:32.476416 | orchestrator | Sunday 05 April 2026 03:02:19 +0000 (0:00:00.235) 0:03:18.002 ********** 2026-04-05 03:02:32.476433 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476449 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:32.476466 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:32.476482 | orchestrator | 2026-04-05 03:02:32.476499 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-05 03:02:32.476513 | orchestrator | Sunday 05 April 2026 03:02:20 
+0000 (0:00:00.332) 0:03:18.334 ********** 2026-04-05 03:02:32.476526 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476537 | orchestrator | 2026-04-05 03:02:32.476550 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-05 03:02:32.476560 | orchestrator | Sunday 05 April 2026 03:02:21 +0000 (0:00:00.759) 0:03:19.094 ********** 2026-04-05 03:02:32.476572 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476583 | orchestrator | 2026-04-05 03:02:32.476595 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-05 03:02:32.476606 | orchestrator | Sunday 05 April 2026 03:02:21 +0000 (0:00:00.255) 0:03:19.350 ********** 2026-04-05 03:02:32.476618 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476629 | orchestrator | 2026-04-05 03:02:32.476639 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-05 03:02:32.476648 | orchestrator | Sunday 05 April 2026 03:02:21 +0000 (0:00:00.144) 0:03:19.494 ********** 2026-04-05 03:02:32.476674 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476684 | orchestrator | 2026-04-05 03:02:32.476693 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-05 03:02:32.476703 | orchestrator | Sunday 05 April 2026 03:02:21 +0000 (0:00:00.244) 0:03:19.739 ********** 2026-04-05 03:02:32.476712 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476722 | orchestrator | 2026-04-05 03:02:32.476731 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-05 03:02:32.476741 | orchestrator | Sunday 05 April 2026 03:02:21 +0000 (0:00:00.251) 0:03:19.990 ********** 2026-04-05 03:02:32.476750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:02:32.476760 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-05 03:02:32.476770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:02:32.476790 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476800 | orchestrator | 2026-04-05 03:02:32.476809 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-05 03:02:32.476839 | orchestrator | Sunday 05 April 2026 03:02:22 +0000 (0:00:00.488) 0:03:20.479 ********** 2026-04-05 03:02:32.476850 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476859 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:02:32.476869 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:02:32.476878 | orchestrator | 2026-04-05 03:02:32.476888 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-05 03:02:32.476897 | orchestrator | Sunday 05 April 2026 03:02:22 +0000 (0:00:00.347) 0:03:20.827 ********** 2026-04-05 03:02:32.476907 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476916 | orchestrator | 2026-04-05 03:02:32.476926 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-05 03:02:32.476935 | orchestrator | Sunday 05 April 2026 03:02:23 +0000 (0:00:00.257) 0:03:21.084 ********** 2026-04-05 03:02:32.476945 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:02:32.476954 | orchestrator | 2026-04-05 03:02:32.476964 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-05 03:02:32.476973 | orchestrator | Sunday 05 April 2026 03:02:23 +0000 (0:00:00.239) 0:03:21.324 ********** 2026-04-05 03:02:32.476982 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:02:32.476992 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:02:32.477001 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:02:32.477011 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:02:32.477021 | orchestrator |
2026-04-05 03:02:32.477030 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-05 03:02:32.477040 | orchestrator | Sunday 05 April 2026 03:02:24 +0000 (0:00:01.190) 0:03:22.514 **********
2026-04-05 03:02:32.477049 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:02:32.477059 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:02:32.477068 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:02:32.477078 | orchestrator |
2026-04-05 03:02:32.477087 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-05 03:02:32.477097 | orchestrator | Sunday 05 April 2026 03:02:24 +0000 (0:00:00.372) 0:03:22.887 **********
2026-04-05 03:02:32.477106 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:02:32.477116 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:02:32.477125 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:02:32.477135 | orchestrator |
2026-04-05 03:02:32.477144 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-05 03:02:32.477153 | orchestrator | Sunday 05 April 2026 03:02:26 +0000 (0:00:01.503) 0:03:24.391 **********
2026-04-05 03:02:32.477163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 03:02:32.477172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 03:02:32.477182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 03:02:32.477191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:02:32.477201 | orchestrator |
2026-04-05 03:02:32.477210 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-05 03:02:32.477220 | orchestrator | Sunday 05 April 2026 03:02:27 +0000 (0:00:00.711) 0:03:25.102 **********
2026-04-05 03:02:32.477229 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:02:32.477239 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:02:32.477248 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:02:32.477314 | orchestrator |
2026-04-05 03:02:32.477325 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-05 03:02:32.477335 | orchestrator | Sunday 05 April 2026 03:02:27 +0000 (0:00:00.358) 0:03:25.461 **********
2026-04-05 03:02:32.477344 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:32.477354 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:32.477364 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:32.477382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:02:32.477420 | orchestrator |
2026-04-05 03:02:32.477437 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-05 03:02:32.477454 | orchestrator | Sunday 05 April 2026 03:02:28 +0000 (0:00:01.117) 0:03:26.579 **********
2026-04-05 03:02:32.477464 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:02:32.477474 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:02:32.477484 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:02:32.477493 | orchestrator |
2026-04-05 03:02:32.477503 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-05 03:02:32.477513 | orchestrator | Sunday 05 April 2026 03:02:28 +0000 (0:00:00.372) 0:03:26.951 **********
2026-04-05 03:02:32.477522 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:02:32.477532 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:02:32.477541 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:02:32.477550 | orchestrator |
2026-04-05 03:02:32.477560 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-05 03:02:32.477570 | orchestrator | Sunday 05 April 2026 03:02:30 +0000 (0:00:01.261) 0:03:28.212 **********
2026-04-05 03:02:32.477579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 03:02:32.477589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 03:02:32.477605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 03:02:32.477615 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:02:32.477625 | orchestrator |
2026-04-05 03:02:32.477634 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-05 03:02:32.477658 | orchestrator | Sunday 05 April 2026 03:02:31 +0000 (0:00:00.979) 0:03:29.192 **********
2026-04-05 03:02:32.477668 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:02:32.477677 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:02:32.477687 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:02:32.477697 | orchestrator |
2026-04-05 03:02:32.477706 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-05 03:02:32.477716 | orchestrator | Sunday 05 April 2026 03:02:31 +0000 (0:00:00.640) 0:03:29.833 **********
2026-04-05 03:02:32.477725 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:02:32.477735 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:02:32.477745 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:02:32.477754 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:32.477764 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:32.477782 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.102981 | orchestrator |
2026-04-05 03:02:50.103094 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 03:02:50.103111 | orchestrator | Sunday 05 April 2026 03:02:32 +0000 (0:00:00.688) 0:03:30.521 **********
2026-04-05 03:02:50.103123 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:02:50.103136 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:02:50.103149 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:02:50.103162 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:02:50.103174 | orchestrator |
2026-04-05 03:02:50.103185 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 03:02:50.103197 | orchestrator | Sunday 05 April 2026 03:02:33 +0000 (0:00:01.147) 0:03:31.668 **********
2026-04-05 03:02:50.103208 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.103221 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.103232 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.103241 | orchestrator |
2026-04-05 03:02:50.103249 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 03:02:50.103256 | orchestrator | Sunday 05 April 2026 03:02:33 +0000 (0:00:00.381) 0:03:32.050 **********
2026-04-05 03:02:50.103263 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:02:50.103291 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:02:50.103298 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:02:50.103305 | orchestrator |
2026-04-05 03:02:50.103312 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 03:02:50.103318 | orchestrator | Sunday 05 April 2026 03:02:35 +0000 (0:00:01.269) 0:03:33.320 **********
2026-04-05 03:02:50.103326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 03:02:50.103334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 03:02:50.103340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 03:02:50.103347 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.103353 | orchestrator |
2026-04-05 03:02:50.103360 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 03:02:50.103366 | orchestrator | Sunday 05 April 2026 03:02:36 +0000 (0:00:01.220) 0:03:34.541 **********
2026-04-05 03:02:50.103373 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.103380 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.103436 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.103449 | orchestrator |
2026-04-05 03:02:50.103461 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-05 03:02:50.103472 | orchestrator |
2026-04-05 03:02:50.103483 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 03:02:50.103516 | orchestrator | Sunday 05 April 2026 03:02:37 +0000 (0:00:00.797) 0:03:35.339 **********
2026-04-05 03:02:50.103552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:02:50.103564 | orchestrator |
2026-04-05 03:02:50.103575 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 03:02:50.103586 | orchestrator | Sunday 05 April 2026 03:02:38 +0000 (0:00:00.772) 0:03:36.112 **********
2026-04-05 03:02:50.103596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:02:50.103608 | orchestrator |
2026-04-05 03:02:50.103620 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 03:02:50.103630 | orchestrator | Sunday 05 April 2026 03:02:38 +0000 (0:00:00.620) 0:03:36.732 **********
2026-04-05 03:02:50.103641 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.103651 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.103662 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.103674 | orchestrator |
2026-04-05 03:02:50.103686 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 03:02:50.103701 | orchestrator | Sunday 05 April 2026 03:02:39 +0000 (0:00:00.726) 0:03:37.459 **********
2026-04-05 03:02:50.103713 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.103735 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.103748 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.103769 | orchestrator |
2026-04-05 03:02:50.103780 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 03:02:50.103791 | orchestrator | Sunday 05 April 2026 03:02:40 +0000 (0:00:00.615) 0:03:38.074 **********
2026-04-05 03:02:50.103801 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.103808 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.103815 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.103822 | orchestrator |
2026-04-05 03:02:50.103829 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 03:02:50.103835 | orchestrator | Sunday 05 April 2026 03:02:40 +0000 (0:00:00.337) 0:03:38.411 **********
2026-04-05 03:02:50.103842 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.103849 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.103870 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.103877 | orchestrator |
2026-04-05 03:02:50.103883 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 03:02:50.103890 | orchestrator | Sunday 05 April 2026 03:02:40 +0000 (0:00:00.327) 0:03:38.738 **********
2026-04-05 03:02:50.103906 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.103913 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.103920 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.103926 | orchestrator |
2026-04-05 03:02:50.103933 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 03:02:50.103940 | orchestrator | Sunday 05 April 2026 03:02:41 +0000 (0:00:00.827) 0:03:39.566 **********
2026-04-05 03:02:50.103947 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.103953 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.103960 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.103966 | orchestrator |
2026-04-05 03:02:50.103974 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 03:02:50.103985 | orchestrator | Sunday 05 April 2026 03:02:42 +0000 (0:00:00.607) 0:03:40.173 **********
2026-04-05 03:02:50.103996 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104030 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104042 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104053 | orchestrator |
2026-04-05 03:02:50.104060 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 03:02:50.104069 | orchestrator | Sunday 05 April 2026 03:02:42 +0000 (0:00:00.337) 0:03:40.511 **********
2026-04-05 03:02:50.104080 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104090 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104101 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104112 | orchestrator |
2026-04-05 03:02:50.104123 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 03:02:50.104135 | orchestrator | Sunday 05 April 2026 03:02:43 +0000 (0:00:00.751) 0:03:41.262 **********
2026-04-05 03:02:50.104147 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104157 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104168 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104178 | orchestrator |
2026-04-05 03:02:50.104185 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 03:02:50.104192 | orchestrator | Sunday 05 April 2026 03:02:43 +0000 (0:00:00.766) 0:03:42.029 **********
2026-04-05 03:02:50.104198 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104205 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104211 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104218 | orchestrator |
2026-04-05 03:02:50.104225 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 03:02:50.104231 | orchestrator | Sunday 05 April 2026 03:02:44 +0000 (0:00:00.617) 0:03:42.646 **********
2026-04-05 03:02:50.104238 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104245 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104251 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104258 | orchestrator |
2026-04-05 03:02:50.104265 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 03:02:50.104271 | orchestrator | Sunday 05 April 2026 03:02:44 +0000 (0:00:00.388) 0:03:43.034 **********
2026-04-05 03:02:50.104278 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104284 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104291 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104297 | orchestrator |
2026-04-05 03:02:50.104304 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 03:02:50.104311 | orchestrator | Sunday 05 April 2026 03:02:45 +0000 (0:00:00.334) 0:03:43.368 **********
2026-04-05 03:02:50.104317 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104324 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104330 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104337 | orchestrator |
2026-04-05 03:02:50.104343 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 03:02:50.104350 | orchestrator | Sunday 05 April 2026 03:02:45 +0000 (0:00:00.319) 0:03:43.688 **********
2026-04-05 03:02:50.104357 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104370 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104376 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104383 | orchestrator |
2026-04-05 03:02:50.104409 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 03:02:50.104416 | orchestrator | Sunday 05 April 2026 03:02:46 +0000 (0:00:00.612) 0:03:44.301 **********
2026-04-05 03:02:50.104423 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104429 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104436 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104443 | orchestrator |
2026-04-05 03:02:50.104449 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 03:02:50.104456 | orchestrator | Sunday 05 April 2026 03:02:46 +0000 (0:00:00.366) 0:03:44.667 **********
2026-04-05 03:02:50.104462 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104469 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:02:50.104476 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:02:50.104482 | orchestrator |
2026-04-05 03:02:50.104489 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 03:02:50.104496 | orchestrator | Sunday 05 April 2026 03:02:46 +0000 (0:00:00.344) 0:03:45.011 **********
2026-04-05 03:02:50.104502 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104509 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104515 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104522 | orchestrator |
2026-04-05 03:02:50.104529 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 03:02:50.104535 | orchestrator | Sunday 05 April 2026 03:02:47 +0000 (0:00:00.342) 0:03:45.354 **********
2026-04-05 03:02:50.104542 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104549 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104555 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104562 | orchestrator |
2026-04-05 03:02:50.104568 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 03:02:50.104575 | orchestrator | Sunday 05 April 2026 03:02:47 +0000 (0:00:00.669) 0:03:46.024 **********
2026-04-05 03:02:50.104582 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104588 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104595 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104601 | orchestrator |
2026-04-05 03:02:50.104613 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-05 03:02:50.104621 | orchestrator | Sunday 05 April 2026 03:02:48 +0000 (0:00:00.636) 0:03:46.660 **********
2026-04-05 03:02:50.104632 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:02:50.104643 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:02:50.104653 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:02:50.104664 | orchestrator |
2026-04-05 03:02:50.104676 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-05 03:02:50.104687 | orchestrator | Sunday 05 April 2026 03:02:48 +0000 (0:00:00.349) 0:03:47.010 **********
2026-04-05 03:02:50.104698 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:02:50.104710 | orchestrator |
2026-04-05 03:02:50.104720 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-05 03:02:50.104727 | orchestrator | Sunday 05 April 2026 03:02:49 +0000 (0:00:00.958) 0:03:47.968 **********
2026-04-05 03:02:50.104734 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:02:50.104740 | orchestrator |
2026-04-05 03:02:50.104753 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-05 03:03:44.475663 | orchestrator | Sunday 05 April 2026 03:02:50 +0000 (0:00:00.180) 0:03:48.149 **********
2026-04-05 03:03:44.475781 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-05 03:03:44.475798 | orchestrator |
2026-04-05 03:03:44.475811 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-05 03:03:44.475822 | orchestrator | Sunday 05 April 2026 03:02:51 +0000 (0:00:01.202) 0:03:49.352 **********
2026-04-05 03:03:44.475856 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.475867 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.475878 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.475889 | orchestrator |
2026-04-05 03:03:44.475900 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-05 03:03:44.475911 | orchestrator | Sunday 05 April 2026 03:02:51 +0000 (0:00:00.371) 0:03:49.723 **********
2026-04-05 03:03:44.475921 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.475932 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.475943 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.475953 | orchestrator |
2026-04-05 03:03:44.475964 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-05 03:03:44.475975 | orchestrator | Sunday 05 April 2026 03:02:52 +0000 (0:00:00.726) 0:03:50.450 **********
2026-04-05 03:03:44.475986 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.475998 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.476009 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.476019 | orchestrator |
2026-04-05 03:03:44.476030 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-05 03:03:44.476041 | orchestrator | Sunday 05 April 2026 03:02:53 +0000 (0:00:01.183) 0:03:51.633 **********
2026-04-05 03:03:44.476052 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.476063 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.476073 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.476084 | orchestrator |
2026-04-05 03:03:44.476095 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-05 03:03:44.476106 | orchestrator | Sunday 05 April 2026 03:02:54 +0000 (0:00:00.846) 0:03:52.479 **********
2026-04-05 03:03:44.476117 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.476127 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.476138 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.476148 | orchestrator |
2026-04-05 03:03:44.476159 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-05 03:03:44.476170 | orchestrator | Sunday 05 April 2026 03:02:55 +0000 (0:00:00.710) 0:03:53.190 **********
2026-04-05 03:03:44.476181 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.476192 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.476206 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.476219 | orchestrator |
2026-04-05 03:03:44.476231 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-05 03:03:44.476244 | orchestrator | Sunday 05 April 2026 03:02:56 +0000 (0:00:01.047) 0:03:54.237 **********
2026-04-05 03:03:44.476257 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.476271 | orchestrator |
2026-04-05 03:03:44.476284 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-05 03:03:44.476296 | orchestrator | Sunday 05 April 2026 03:02:57 +0000 (0:00:01.532) 0:03:55.770 **********
2026-04-05 03:03:44.476310 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.476323 | orchestrator |
2026-04-05 03:03:44.476335 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-05 03:03:44.476348 | orchestrator | Sunday 05 April 2026 03:02:58 +0000 (0:00:00.797) 0:03:56.567 **********
2026-04-05 03:03:44.476361 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-05 03:03:44.476374 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 03:03:44.476412 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 03:03:44.476425 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 03:03:44.476437 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-05 03:03:44.476448 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 03:03:44.476459 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 03:03:44.476470 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-05 03:03:44.476480 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 03:03:44.476499 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-05 03:03:44.476577 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-05 03:03:44.476592 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-05 03:03:44.476603 | orchestrator |
2026-04-05 03:03:44.476618 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-05 03:03:44.476637 | orchestrator | Sunday 05 April 2026 03:03:01 +0000 (0:00:03.330) 0:03:59.897 **********
2026-04-05 03:03:44.476665 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.476684 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.476715 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.476735 | orchestrator |
2026-04-05 03:03:44.476752 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-05 03:03:44.476770 | orchestrator | Sunday 05 April 2026 03:03:03 +0000 (0:00:01.241) 0:04:01.139 **********
2026-04-05 03:03:44.476790 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.476808 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.476827 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.476846 | orchestrator |
2026-04-05 03:03:44.476861 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-05 03:03:44.476872 | orchestrator | Sunday 05 April 2026 03:03:03 +0000 (0:00:00.710) 0:04:01.849 **********
2026-04-05 03:03:44.476883 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.476893 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.476904 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.476914 | orchestrator |
2026-04-05 03:03:44.476925 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-05 03:03:44.476936 | orchestrator | Sunday 05 April 2026 03:03:04 +0000 (0:00:00.389) 0:04:02.238 **********
2026-04-05 03:03:44.476946 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.476957 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.476983 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.476995 | orchestrator |
2026-04-05 03:03:44.477006 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-05 03:03:44.477017 | orchestrator | Sunday 05 April 2026 03:03:05 +0000 (0:00:01.480) 0:04:03.719 **********
2026-04-05 03:03:44.477027 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.477038 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.477049 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.477059 | orchestrator |
2026-04-05 03:03:44.477070 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-05 03:03:44.477081 | orchestrator | Sunday 05 April 2026 03:03:06 +0000 (0:00:01.300) 0:04:05.020 **********
2026-04-05 03:03:44.477092 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:44.477109 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:44.477127 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:03:44.477145 | orchestrator |
2026-04-05 03:03:44.477164 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-05 03:03:44.477182 | orchestrator | Sunday 05 April 2026 03:03:07 +0000 (0:00:00.638) 0:04:05.658 **********
2026-04-05 03:03:44.477200 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:03:44.477218 | orchestrator |
2026-04-05 03:03:44.477236 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-05 03:03:44.477252 | orchestrator | Sunday 05 April 2026 03:03:08 +0000 (0:00:00.578) 0:04:06.236 **********
2026-04-05 03:03:44.477268 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:44.477287 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:44.477304 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:03:44.477322 | orchestrator |
2026-04-05 03:03:44.477340 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-05 03:03:44.477358 | orchestrator | Sunday 05 April 2026 03:03:08 +0000 (0:00:00.338) 0:04:06.575 **********
2026-04-05 03:03:44.477377 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:44.477490 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:44.477509 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:03:44.477526 | orchestrator |
2026-04-05 03:03:44.477542 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-05 03:03:44.477559 | orchestrator | Sunday 05 April 2026 03:03:09 +0000 (0:00:00.607) 0:04:07.182 **********
2026-04-05 03:03:44.477575 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:03:44.477594 | orchestrator |
2026-04-05 03:03:44.477611 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-05 03:03:44.477629 | orchestrator | Sunday 05 April 2026 03:03:09 +0000 (0:00:00.586) 0:04:07.769 **********
2026-04-05 03:03:44.477646 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.477662 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.477679 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.477697 | orchestrator |
2026-04-05 03:03:44.477716 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-05 03:03:44.477733 | orchestrator | Sunday 05 April 2026 03:03:11 +0000 (0:00:01.707) 0:04:09.476 **********
2026-04-05 03:03:44.477751 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.477769 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.477788 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.477807 | orchestrator |
2026-04-05 03:03:44.477826 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-05 03:03:44.477845 | orchestrator | Sunday 05 April 2026 03:03:12 +0000 (0:00:01.390) 0:04:10.866 **********
2026-04-05 03:03:44.477865 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.477884 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.477902 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.477914 | orchestrator |
2026-04-05 03:03:44.477924 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-05 03:03:44.477935 | orchestrator | Sunday 05 April 2026 03:03:14 +0000 (0:00:01.714) 0:04:12.582 **********
2026-04-05 03:03:44.477946 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:03:44.477955 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:03:44.477965 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:03:44.477974 | orchestrator |
2026-04-05 03:03:44.477984 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-05 03:03:44.477993 | orchestrator | Sunday 05 April 2026 03:03:16 +0000 (0:00:01.995) 0:04:14.577 **********
2026-04-05 03:03:44.478003 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:03:44.478012 | orchestrator |
2026-04-05 03:03:44.478074 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-05 03:03:44.478084 | orchestrator | Sunday 05 April 2026 03:03:17 +0000 (0:00:00.667) 0:04:15.245 **********
2026-04-05 03:03:44.478094 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.478103 | orchestrator |
2026-04-05 03:03:44.478121 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-05 03:03:44.478131 | orchestrator | Sunday 05 April 2026 03:03:18 +0000 (0:00:01.355) 0:04:16.600 **********
2026-04-05 03:03:44.478141 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:44.478151 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:44.478160 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:44.478170 | orchestrator |
2026-04-05 03:03:44.478179 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-05 03:03:44.478189 | orchestrator | Sunday 05 April 2026 03:03:28 +0000 (0:00:09.645) 0:04:26.246 **********
2026-04-05 03:03:44.478199 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:44.478208 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:44.478218 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:03:44.478227 | orchestrator |
2026-04-05 03:03:44.478237 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-05 03:03:44.478246 | orchestrator | Sunday 05 April 2026 03:03:28 +0000 (0:00:00.371) 0:04:26.617 **********
2026-04-05 03:03:44.478282 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-05 03:03:56.919565 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-05 03:03:56.919664 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-05 03:03:56.919681 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-05 03:03:56.919696 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-05 03:03:56.919717 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a27ce06c55ebb3bc10aea80ed51025b011dcb12c'}])
2026-04-05 03:03:56.919745 | orchestrator |
2026-04-05 03:03:56.919771 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 03:03:56.919791 | orchestrator | Sunday 05 April 2026 03:03:44 +0000 (0:00:15.903) 0:04:42.520 **********
2026-04-05 03:03:56.919809 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:56.919828 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:56.919844 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:03:56.919861 | orchestrator |
2026-04-05 03:03:56.919878 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-05 03:03:56.919898 | orchestrator | Sunday 05 April 2026 03:03:44 +0000 (0:00:00.360) 0:04:42.881 **********
2026-04-05 03:03:56.919915 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:03:56.919932 | orchestrator |
2026-04-05 03:03:56.919966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-05 03:03:56.919984 | orchestrator | Sunday 05 April 2026 03:03:45 +0000 (0:00:00.665) 0:04:43.547 **********
2026-04-05 03:03:56.920001 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:03:56.920020 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:03:56.920035 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:03:56.920053 | orchestrator |
2026-04-05 03:03:56.920071 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-05 03:03:56.920088 | orchestrator | Sunday 05 April 2026 03:03:45 +0000 (0:00:00.309) 0:04:43.856 **********
2026-04-05 03:03:56.920136 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:03:56.920156 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:03:56.920173 | orchestrator | skipping: [testbed-node-2]
2026-04-05
03:03:56.920192 | orchestrator | 2026-04-05 03:03:56.920227 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-05 03:03:56.920248 | orchestrator | Sunday 05 April 2026 03:03:46 +0000 (0:00:00.313) 0:04:44.170 ********** 2026-04-05 03:03:56.920266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 03:03:56.920286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 03:03:56.920305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 03:03:56.920321 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.920332 | orchestrator | 2026-04-05 03:03:56.920343 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-05 03:03:56.920353 | orchestrator | Sunday 05 April 2026 03:03:47 +0000 (0:00:00.938) 0:04:45.109 ********** 2026-04-05 03:03:56.920364 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:03:56.920399 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.920410 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.920421 | orchestrator | 2026-04-05 03:03:56.920431 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-05 03:03:56.920442 | orchestrator | 2026-04-05 03:03:56.920453 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 03:03:56.920464 | orchestrator | Sunday 05 April 2026 03:03:47 +0000 (0:00:00.928) 0:04:46.037 ********** 2026-04-05 03:03:56.920476 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:03:56.920488 | orchestrator | 2026-04-05 03:03:56.920519 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 03:03:56.920530 | orchestrator | Sunday 05 April 2026 03:03:48 +0000 
(0:00:00.690) 0:04:46.728 ********** 2026-04-05 03:03:56.920542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:03:56.920552 | orchestrator | 2026-04-05 03:03:56.920563 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 03:03:56.920574 | orchestrator | Sunday 05 April 2026 03:03:49 +0000 (0:00:00.815) 0:04:47.543 ********** 2026-04-05 03:03:56.920585 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:03:56.920595 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.920606 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.920617 | orchestrator | 2026-04-05 03:03:56.920628 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 03:03:56.920638 | orchestrator | Sunday 05 April 2026 03:03:50 +0000 (0:00:00.753) 0:04:48.296 ********** 2026-04-05 03:03:56.920649 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.920660 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.920671 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.920681 | orchestrator | 2026-04-05 03:03:56.920692 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 03:03:56.920703 | orchestrator | Sunday 05 April 2026 03:03:50 +0000 (0:00:00.373) 0:04:48.669 ********** 2026-04-05 03:03:56.920714 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.920724 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.920735 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.920746 | orchestrator | 2026-04-05 03:03:56.920756 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 03:03:56.920767 | orchestrator | Sunday 05 April 2026 03:03:51 +0000 (0:00:00.618) 0:04:49.287 ********** 2026-04-05 03:03:56.920778 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.920788 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.920799 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.920813 | orchestrator | 2026-04-05 03:03:56.920826 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 03:03:56.920850 | orchestrator | Sunday 05 April 2026 03:03:51 +0000 (0:00:00.348) 0:04:49.636 ********** 2026-04-05 03:03:56.920863 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:03:56.920877 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.920889 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.920902 | orchestrator | 2026-04-05 03:03:56.920914 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 03:03:56.920924 | orchestrator | Sunday 05 April 2026 03:03:52 +0000 (0:00:00.735) 0:04:50.372 ********** 2026-04-05 03:03:56.920935 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.920946 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.920956 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.920967 | orchestrator | 2026-04-05 03:03:56.920978 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 03:03:56.920989 | orchestrator | Sunday 05 April 2026 03:03:52 +0000 (0:00:00.338) 0:04:50.711 ********** 2026-04-05 03:03:56.920999 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.921010 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.921021 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.921032 | orchestrator | 2026-04-05 03:03:56.921042 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 03:03:56.921053 | orchestrator | Sunday 05 April 2026 03:03:53 +0000 (0:00:00.663) 0:04:51.374 ********** 2026-04-05 03:03:56.921064 | orchestrator | ok: 
[testbed-node-0] 2026-04-05 03:03:56.921074 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.921085 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.921096 | orchestrator | 2026-04-05 03:03:56.921107 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 03:03:56.921117 | orchestrator | Sunday 05 April 2026 03:03:54 +0000 (0:00:00.752) 0:04:52.127 ********** 2026-04-05 03:03:56.921128 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:03:56.921139 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.921149 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.921160 | orchestrator | 2026-04-05 03:03:56.921171 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 03:03:56.921182 | orchestrator | Sunday 05 April 2026 03:03:54 +0000 (0:00:00.770) 0:04:52.897 ********** 2026-04-05 03:03:56.921193 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.921204 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.921214 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.921225 | orchestrator | 2026-04-05 03:03:56.921242 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 03:03:56.921253 | orchestrator | Sunday 05 April 2026 03:03:55 +0000 (0:00:00.340) 0:04:53.238 ********** 2026-04-05 03:03:56.921264 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:03:56.921274 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:03:56.921285 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:03:56.921296 | orchestrator | 2026-04-05 03:03:56.921306 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 03:03:56.921317 | orchestrator | Sunday 05 April 2026 03:03:55 +0000 (0:00:00.690) 0:04:53.928 ********** 2026-04-05 03:03:56.921328 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.921339 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.921349 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.921360 | orchestrator | 2026-04-05 03:03:56.921371 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 03:03:56.921472 | orchestrator | Sunday 05 April 2026 03:03:56 +0000 (0:00:00.359) 0:04:54.287 ********** 2026-04-05 03:03:56.921484 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.921495 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.921505 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.921516 | orchestrator | 2026-04-05 03:03:56.921527 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 03:03:56.921538 | orchestrator | Sunday 05 April 2026 03:03:56 +0000 (0:00:00.342) 0:04:54.630 ********** 2026-04-05 03:03:56.921557 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:03:56.921568 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:03:56.921579 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:03:56.921590 | orchestrator | 2026-04-05 03:03:56.921608 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 03:05:00.478882 | orchestrator | Sunday 05 April 2026 03:03:56 +0000 (0:00:00.334) 0:04:54.965 ********** 2026-04-05 03:05:00.478997 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.479014 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.479025 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.479035 | orchestrator | 2026-04-05 03:05:00.479046 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 03:05:00.479057 | orchestrator | Sunday 05 April 2026 03:03:57 +0000 (0:00:00.636) 0:04:55.601 ********** 2026-04-05 03:05:00.479067 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.479077 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.479087 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.479097 | orchestrator | 2026-04-05 03:05:00.479106 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 03:05:00.479116 | orchestrator | Sunday 05 April 2026 03:03:57 +0000 (0:00:00.356) 0:04:55.958 ********** 2026-04-05 03:05:00.479126 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:05:00.479137 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:05:00.479147 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:05:00.479157 | orchestrator | 2026-04-05 03:05:00.479166 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 03:05:00.479176 | orchestrator | Sunday 05 April 2026 03:03:58 +0000 (0:00:00.347) 0:04:56.306 ********** 2026-04-05 03:05:00.479185 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:05:00.479195 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:05:00.479205 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:05:00.479215 | orchestrator | 2026-04-05 03:05:00.479225 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 03:05:00.479235 | orchestrator | Sunday 05 April 2026 03:03:58 +0000 (0:00:00.379) 0:04:56.685 ********** 2026-04-05 03:05:00.479244 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:05:00.479257 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:05:00.479274 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:05:00.479291 | orchestrator | 2026-04-05 03:05:00.479308 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-05 03:05:00.479325 | orchestrator | Sunday 05 April 2026 03:03:59 +0000 (0:00:00.921) 0:04:57.607 ********** 2026-04-05 03:05:00.479342 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 03:05:00.479441 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:05:00.479462 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:05:00.479475 | orchestrator | 2026-04-05 03:05:00.479486 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-05 03:05:00.479498 | orchestrator | Sunday 05 April 2026 03:04:00 +0000 (0:00:00.683) 0:04:58.290 ********** 2026-04-05 03:05:00.479510 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:05:00.479523 | orchestrator | 2026-04-05 03:05:00.479536 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-05 03:05:00.479548 | orchestrator | Sunday 05 April 2026 03:04:01 +0000 (0:00:00.853) 0:04:59.144 ********** 2026-04-05 03:05:00.479561 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:05:00.479573 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:05:00.479585 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:05:00.479597 | orchestrator | 2026-04-05 03:05:00.479608 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-05 03:05:00.479637 | orchestrator | Sunday 05 April 2026 03:04:01 +0000 (0:00:00.765) 0:04:59.909 ********** 2026-04-05 03:05:00.479701 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.479719 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.479738 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.479756 | orchestrator | 2026-04-05 03:05:00.479773 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-05 03:05:00.479788 | orchestrator | Sunday 05 April 2026 03:04:02 +0000 (0:00:00.347) 0:05:00.257 ********** 2026-04-05 03:05:00.479799 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 
03:05:00.479809 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:05:00.479819 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:05:00.479829 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-05 03:05:00.479838 | orchestrator | 2026-04-05 03:05:00.479849 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-05 03:05:00.479873 | orchestrator | Sunday 05 April 2026 03:04:14 +0000 (0:00:11.839) 0:05:12.097 ********** 2026-04-05 03:05:00.479883 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:05:00.479893 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:05:00.479902 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:05:00.479912 | orchestrator | 2026-04-05 03:05:00.479922 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-05 03:05:00.479931 | orchestrator | Sunday 05 April 2026 03:04:14 +0000 (0:00:00.460) 0:05:12.557 ********** 2026-04-05 03:05:00.479941 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-05 03:05:00.479951 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-05 03:05:00.479960 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-05 03:05:00.479970 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 03:05:00.479980 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:05:00.479990 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:05:00.479999 | orchestrator | 2026-04-05 03:05:00.480009 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-05 03:05:00.480018 | orchestrator | Sunday 05 April 2026 03:04:17 +0000 (0:00:02.896) 0:05:15.454 ********** 2026-04-05 03:05:00.480028 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-05 03:05:00.480038 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-05 03:05:00.480047 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-05 03:05:00.480057 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:05:00.480067 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-05 03:05:00.480096 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-05 03:05:00.480106 | orchestrator | 2026-04-05 03:05:00.480116 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-05 03:05:00.480129 | orchestrator | Sunday 05 April 2026 03:04:18 +0000 (0:00:01.286) 0:05:16.740 ********** 2026-04-05 03:05:00.480145 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:05:00.480164 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:05:00.480188 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:05:00.480205 | orchestrator | 2026-04-05 03:05:00.480220 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-05 03:05:00.480236 | orchestrator | Sunday 05 April 2026 03:04:19 +0000 (0:00:00.725) 0:05:17.466 ********** 2026-04-05 03:05:00.480251 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.480266 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.480282 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.480296 | orchestrator | 2026-04-05 03:05:00.480311 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-05 03:05:00.480325 | orchestrator | Sunday 05 April 2026 03:04:19 +0000 (0:00:00.311) 0:05:17.778 ********** 2026-04-05 03:05:00.480341 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.480385 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.480416 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.480431 | orchestrator | 2026-04-05 03:05:00.480446 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-04-05 03:05:00.480462 | orchestrator | Sunday 05 April 2026 03:04:20 +0000 (0:00:00.509) 0:05:18.288 ********** 2026-04-05 03:05:00.480479 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:05:00.480496 | orchestrator | 2026-04-05 03:05:00.480514 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-05 03:05:00.480530 | orchestrator | Sunday 05 April 2026 03:04:20 +0000 (0:00:00.552) 0:05:18.840 ********** 2026-04-05 03:05:00.480547 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.480564 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.480581 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.480597 | orchestrator | 2026-04-05 03:05:00.480613 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-05 03:05:00.480628 | orchestrator | Sunday 05 April 2026 03:04:21 +0000 (0:00:00.354) 0:05:19.195 ********** 2026-04-05 03:05:00.480643 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.480658 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.480673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:05:00.480687 | orchestrator | 2026-04-05 03:05:00.480703 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-05 03:05:00.480720 | orchestrator | Sunday 05 April 2026 03:04:21 +0000 (0:00:00.608) 0:05:19.803 ********** 2026-04-05 03:05:00.480734 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:05:00.480748 | orchestrator | 2026-04-05 03:05:00.480762 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-05 03:05:00.480777 | orchestrator | Sunday 05 April 2026 03:04:22 
+0000 (0:00:00.699) 0:05:20.503 ********** 2026-04-05 03:05:00.480790 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:05:00.480805 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:05:00.480821 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:05:00.480836 | orchestrator | 2026-04-05 03:05:00.480851 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-05 03:05:00.480867 | orchestrator | Sunday 05 April 2026 03:04:23 +0000 (0:00:01.303) 0:05:21.806 ********** 2026-04-05 03:05:00.480882 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:05:00.480898 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:05:00.480914 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:05:00.480928 | orchestrator | 2026-04-05 03:05:00.480944 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-05 03:05:00.480959 | orchestrator | Sunday 05 April 2026 03:04:25 +0000 (0:00:01.711) 0:05:23.518 ********** 2026-04-05 03:05:00.480975 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:05:00.480990 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:05:00.481006 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:05:00.481022 | orchestrator | 2026-04-05 03:05:00.481038 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-05 03:05:00.481054 | orchestrator | Sunday 05 April 2026 03:04:27 +0000 (0:00:01.861) 0:05:25.379 ********** 2026-04-05 03:05:00.481070 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:05:00.481100 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:05:00.481117 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:05:00.481133 | orchestrator | 2026-04-05 03:05:00.481149 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-05 03:05:00.481166 | orchestrator | Sunday 05 April 2026 03:04:29 +0000 
(0:00:01.976) 0:05:27.356 ********** 2026-04-05 03:05:00.481183 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:05:00.481200 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:05:00.481216 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-05 03:05:00.481234 | orchestrator | 2026-04-05 03:05:00.481246 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-05 03:05:00.481268 | orchestrator | Sunday 05 April 2026 03:04:30 +0000 (0:00:00.730) 0:05:28.086 ********** 2026-04-05 03:05:00.481278 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-05 03:05:00.481288 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-05 03:05:00.481298 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-05 03:05:00.481307 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-04-05 03:05:00.481338 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-04-05 03:05:30.060564 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:05:30.060679 | orchestrator |
2026-04-05 03:05:30.060698 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-05 03:05:30.060710 | orchestrator | Sunday 05 April 2026 03:05:00 +0000 (0:00:30.431) 0:05:58.518 **********
2026-04-05 03:05:30.060722 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:05:30.060733 | orchestrator |
2026-04-05 03:05:30.060744 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-05 03:05:30.060756 | orchestrator | Sunday 05 April 2026 03:05:01 +0000 (0:00:01.395) 0:05:59.913 **********
2026-04-05 03:05:30.060767 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:05:30.060779 | orchestrator |
2026-04-05 03:05:30.060790 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-05 03:05:30.060801 | orchestrator | Sunday 05 April 2026 03:05:02 +0000 (0:00:00.379) 0:06:00.293 **********
2026-04-05 03:05:30.060812 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:05:30.060823 | orchestrator |
2026-04-05 03:05:30.060834 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-05 03:05:30.060845 | orchestrator | Sunday 05 April 2026 03:05:02 +0000 (0:00:00.152) 0:06:00.446 **********
2026-04-05 03:05:30.060856 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-05 03:05:30.060866 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-05 03:05:30.060877 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-05 03:05:30.060888 | orchestrator |
2026-04-05 03:05:30.060898 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-05 03:05:30.060909 | orchestrator | Sunday 05 April 2026 03:05:09 +0000 (0:00:06.697) 0:06:07.143 **********
2026-04-05 03:05:30.060929 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-05 03:05:30.060948 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-05 03:05:30.060967 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-05 03:05:30.060986 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-05 03:05:30.061006 | orchestrator |
2026-04-05 03:05:30.061027 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-05 03:05:30.061046 | orchestrator | Sunday 05 April 2026 03:05:14 +0000 (0:00:05.388) 0:06:12.532 **********
2026-04-05 03:05:30.061064 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:05:30.061078 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:05:30.061091 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:05:30.061104 | orchestrator |
2026-04-05 03:05:30.061118 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-05 03:05:30.061132 | orchestrator | Sunday 05 April 2026 03:05:15 +0000 (0:00:00.740) 0:06:13.272 **********
2026-04-05 03:05:30.061146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:05:30.061160 | orchestrator |
2026-04-05 03:05:30.061201 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-05 03:05:30.061216 | orchestrator | Sunday 05 April 2026 03:05:15 +0000 (0:00:00.645) 0:06:13.917 **********
2026-04-05 03:05:30.061229 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:05:30.061242 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:05:30.061255 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:05:30.061268 | orchestrator |
2026-04-05 03:05:30.061281 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-05 03:05:30.061295 | orchestrator | Sunday 05 April 2026 03:05:16 +0000 (0:00:00.668) 0:06:14.586 **********
2026-04-05 03:05:30.061308 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:05:30.061319 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:05:30.061330 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:05:30.061340 | orchestrator |
2026-04-05 03:05:30.061392 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-05 03:05:30.061409 | orchestrator | Sunday 05 April 2026 03:05:17 +0000 (0:00:01.247) 0:06:15.834 **********
2026-04-05 03:05:30.061420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 03:05:30.061431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 03:05:30.061457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 03:05:30.061468 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:05:30.061479 | orchestrator |
2026-04-05 03:05:30.061490 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-05 03:05:30.061501 | orchestrator | Sunday 05 April 2026 03:05:18 +0000 (0:00:00.732) 0:06:16.566 **********
2026-04-05 03:05:30.061512 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:05:30.061523 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:05:30.061534 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:05:30.061545 | orchestrator |
2026-04-05 03:05:30.061555 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-05 03:05:30.061566 | orchestrator |
2026-04-05 03:05:30.061578 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 03:05:30.061589 | orchestrator | Sunday 05 April 2026 03:05:19 +0000 (0:00:00.907) 0:06:17.473 **********
2026-04-05 03:05:30.061601 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:05:30.061613 | orchestrator |
2026-04-05 03:05:30.061624 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 03:05:30.061635 | orchestrator | Sunday 05 April 2026 03:05:20 +0000 (0:00:00.607) 0:06:18.081 **********
2026-04-05 03:05:30.061646 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:05:30.061657 | orchestrator |
2026-04-05 03:05:30.061667 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 03:05:30.061698 | orchestrator | Sunday 05 April 2026 03:05:20 +0000 (0:00:00.852) 0:06:18.933 **********
2026-04-05 03:05:30.061710 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:05:30.061721 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:05:30.061732 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:05:30.061743 | orchestrator |
2026-04-05 03:05:30.061754 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 03:05:30.061765 | orchestrator | Sunday 05 April 2026 03:05:21 +0000 (0:00:00.379) 0:06:19.312 **********
2026-04-05 03:05:30.061775 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:05:30.061786 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:05:30.061797 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:05:30.061808 | orchestrator |
2026-04-05 03:05:30.061818 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 03:05:30.061829 | orchestrator | Sunday 05 April 2026 03:05:21 +0000 (0:00:00.722) 0:06:20.034 **********
2026-04-05 03:05:30.061840 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:05:30.061851 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:05:30.061871 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:05:30.061882 | orchestrator |
2026-04-05 03:05:30.061893 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 03:05:30.061903 | orchestrator | Sunday 05 April 2026 03:05:22 +0000 (0:00:00.808) 0:06:20.843 **********
2026-04-05 03:05:30.061914 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:05:30.061925 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:05:30.061936 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:05:30.061946 | orchestrator |
2026-04-05 03:05:30.061957 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 03:05:30.061968 | orchestrator | Sunday 05 April 2026 03:05:24 +0000 (0:00:01.387) 0:06:22.230 **********
2026-04-05 03:05:30.061979 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:05:30.061990 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:05:30.062001 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:05:30.062011 | orchestrator |
2026-04-05 03:05:30.062092 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 03:05:30.062104 | orchestrator | Sunday 05 April 2026 03:05:24 +0000 (0:00:00.361) 0:06:22.592 **********
2026-04-05 03:05:30.062115 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:05:30.062126 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:05:30.062137 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:05:30.062151 | orchestrator |
2026-04-05 03:05:30.062170 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 03:05:30.062182 | orchestrator | Sunday 05 April 2026 03:05:24 +0000 (0:00:00.336) 0:06:22.929 **********
2026-04-05 03:05:30.062193 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:05:30.062203 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:05:30.062214 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:05:30.062225 | orchestrator |
2026-04-05 03:05:30.062236 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 03:05:30.062247 | orchestrator | Sunday 05 April 2026 03:05:25 +0000 (0:00:00.329) 0:06:23.258 **********
2026-04-05 03:05:30.062258 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:05:30.062269 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:05:30.062280 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:05:30.062291 | orchestrator |
2026-04-05 03:05:30.062302 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 03:05:30.062312 | orchestrator | Sunday 05 April 2026 03:05:26 +0000 (0:00:01.136) 0:06:24.395 **********
2026-04-05 03:05:30.062323 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:05:30.062334 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:05:30.062368 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:05:30.062388 | orchestrator |
2026-04-05 03:05:30.062400 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 03:05:30.062411 | orchestrator | Sunday 05 April 2026 03:05:27 +0000 (0:00:00.769) 0:06:25.165 **********
2026-04-05 03:05:30.062422 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:05:30.062433 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:05:30.062443 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:05:30.062454 | orchestrator |
2026-04-05 03:05:30.062465 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 03:05:30.062476 | orchestrator | Sunday 05 April 2026 03:05:27 +0000 (0:00:00.348) 0:06:25.513 **********
2026-04-05 03:05:30.062486 | orchestrator | skipping:
[testbed-node-3] 2026-04-05 03:05:30.062497 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:05:30.062508 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:05:30.062519 | orchestrator | 2026-04-05 03:05:30.062530 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 03:05:30.062547 | orchestrator | Sunday 05 April 2026 03:05:27 +0000 (0:00:00.354) 0:06:25.867 ********** 2026-04-05 03:05:30.062558 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:05:30.062569 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:05:30.062580 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:05:30.062590 | orchestrator | 2026-04-05 03:05:30.062609 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 03:05:30.062620 | orchestrator | Sunday 05 April 2026 03:05:28 +0000 (0:00:00.668) 0:06:26.536 ********** 2026-04-05 03:05:30.062631 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:05:30.062642 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:05:30.062653 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:05:30.062664 | orchestrator | 2026-04-05 03:05:30.062675 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 03:05:30.062686 | orchestrator | Sunday 05 April 2026 03:05:28 +0000 (0:00:00.387) 0:06:26.923 ********** 2026-04-05 03:05:30.062696 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:05:30.062707 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:05:30.062718 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:05:30.062728 | orchestrator | 2026-04-05 03:05:30.062739 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 03:05:30.062750 | orchestrator | Sunday 05 April 2026 03:05:29 +0000 (0:00:00.366) 0:06:27.289 ********** 2026-04-05 03:05:30.062761 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:05:30.062772 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 03:05:30.062783 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:05:30.062793 | orchestrator | 2026-04-05 03:05:30.062804 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 03:05:30.062815 | orchestrator | Sunday 05 April 2026 03:05:29 +0000 (0:00:00.317) 0:06:27.607 ********** 2026-04-05 03:05:30.062834 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:06:26.248960 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:06:26.249060 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:06:26.249075 | orchestrator | 2026-04-05 03:06:26.249086 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 03:06:26.249098 | orchestrator | Sunday 05 April 2026 03:05:30 +0000 (0:00:00.646) 0:06:28.254 ********** 2026-04-05 03:06:26.249108 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:06:26.249118 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:06:26.249128 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:06:26.249137 | orchestrator | 2026-04-05 03:06:26.249147 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 03:06:26.249157 | orchestrator | Sunday 05 April 2026 03:05:30 +0000 (0:00:00.348) 0:06:28.602 ********** 2026-04-05 03:06:26.249167 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:06:26.249177 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:06:26.249187 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:06:26.249196 | orchestrator | 2026-04-05 03:06:26.249206 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 03:06:26.249216 | orchestrator | Sunday 05 April 2026 03:05:30 +0000 (0:00:00.378) 0:06:28.981 ********** 2026-04-05 03:06:26.249225 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:06:26.249235 | orchestrator | ok: 
[testbed-node-4] 2026-04-05 03:06:26.249244 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:06:26.249254 | orchestrator | 2026-04-05 03:06:26.249263 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-05 03:06:26.249273 | orchestrator | Sunday 05 April 2026 03:05:31 +0000 (0:00:00.878) 0:06:29.859 ********** 2026-04-05 03:06:26.249282 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:06:26.249292 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:06:26.249301 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:06:26.249311 | orchestrator | 2026-04-05 03:06:26.249321 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-05 03:06:26.249330 | orchestrator | Sunday 05 April 2026 03:05:32 +0000 (0:00:00.365) 0:06:30.224 ********** 2026-04-05 03:06:26.249384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 03:06:26.249397 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:06:26.249407 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:06:26.249442 | orchestrator | 2026-04-05 03:06:26.249452 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-05 03:06:26.249462 | orchestrator | Sunday 05 April 2026 03:05:32 +0000 (0:00:00.678) 0:06:30.903 ********** 2026-04-05 03:06:26.249472 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:06:26.249482 | orchestrator | 2026-04-05 03:06:26.249494 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-05 03:06:26.249505 | orchestrator | Sunday 05 April 2026 03:05:33 +0000 (0:00:00.801) 0:06:31.704 ********** 2026-04-05 03:06:26.249518 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 03:06:26.249529 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:06:26.249540 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:06:26.249551 | orchestrator | 2026-04-05 03:06:26.249562 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-05 03:06:26.249574 | orchestrator | Sunday 05 April 2026 03:05:34 +0000 (0:00:00.370) 0:06:32.075 ********** 2026-04-05 03:06:26.249585 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:06:26.249596 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:06:26.249608 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:06:26.249618 | orchestrator | 2026-04-05 03:06:26.249630 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-05 03:06:26.249642 | orchestrator | Sunday 05 April 2026 03:05:34 +0000 (0:00:00.323) 0:06:32.398 ********** 2026-04-05 03:06:26.249654 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:06:26.249665 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:06:26.249676 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:06:26.249688 | orchestrator | 2026-04-05 03:06:26.249700 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-05 03:06:26.249711 | orchestrator | Sunday 05 April 2026 03:05:35 +0000 (0:00:00.704) 0:06:33.103 ********** 2026-04-05 03:06:26.249723 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:06:26.249734 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:06:26.249745 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:06:26.249757 | orchestrator | 2026-04-05 03:06:26.249782 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-05 03:06:26.249793 | orchestrator | Sunday 05 April 2026 03:05:35 +0000 (0:00:00.666) 0:06:33.770 ********** 2026-04-05 03:06:26.249805 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 03:06:26.249819 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 03:06:26.249830 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 03:06:26.249842 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 03:06:26.249854 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 03:06:26.249865 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 03:06:26.249877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 03:06:26.249886 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 03:06:26.249896 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 03:06:26.249905 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 03:06:26.249932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 03:06:26.249942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 03:06:26.249952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 03:06:26.249961 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 03:06:26.249977 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 03:06:26.249987 | orchestrator | 2026-04-05 03:06:26.249997 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-05 03:06:26.250006 | orchestrator | Sunday 05 April 2026 03:05:38 +0000 (0:00:02.324) 0:06:36.095 ********** 2026-04-05 03:06:26.250072 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:06:26.250083 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:06:26.250093 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:06:26.250102 | orchestrator | 2026-04-05 03:06:26.250112 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 03:06:26.250131 | orchestrator | Sunday 05 April 2026 03:05:38 +0000 (0:00:00.328) 0:06:36.423 ********** 2026-04-05 03:06:26.250141 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:06:26.250151 | orchestrator | 2026-04-05 03:06:26.250161 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 03:06:26.250170 | orchestrator | Sunday 05 April 2026 03:05:39 +0000 (0:00:00.839) 0:06:37.263 ********** 2026-04-05 03:06:26.250180 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 03:06:26.250190 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 03:06:26.250200 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 03:06:26.250209 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-05 03:06:26.250219 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-05 03:06:26.250229 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-05 03:06:26.250239 | orchestrator | 2026-04-05 03:06:26.250248 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 03:06:26.250258 | orchestrator | Sunday 05 April 2026 03:05:40 +0000 (0:00:01.096) 0:06:38.359 ********** 2026-04-05 03:06:26.250267 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:06:26.250277 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 03:06:26.250287 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:06:26.250299 | orchestrator | 2026-04-05 03:06:26.250315 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 03:06:26.250330 | orchestrator | Sunday 05 April 2026 03:05:42 +0000 (0:00:02.318) 0:06:40.678 ********** 2026-04-05 03:06:26.250417 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 03:06:26.250428 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 03:06:26.250438 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:06:26.250448 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 03:06:26.250458 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 03:06:26.250467 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:06:26.250477 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 03:06:26.250486 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 03:06:26.250496 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:06:26.250506 | orchestrator | 2026-04-05 03:06:26.250515 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 03:06:26.250525 | orchestrator | Sunday 05 April 2026 03:05:43 +0000 (0:00:01.223) 0:06:41.901 ********** 2026-04-05 03:06:26.250534 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:06:26.250544 | orchestrator | 2026-04-05 03:06:26.250554 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 03:06:26.250563 | orchestrator | Sunday 05 April 2026 03:05:46 +0000 (0:00:02.264) 0:06:44.166 ********** 2026-04-05 03:06:26.250580 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:06:26.250591 | orchestrator | 2026-04-05 03:06:26.250610 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-05 03:06:26.250620 | orchestrator | Sunday 05 April 2026 03:05:46 +0000 (0:00:00.889) 0:06:45.055 ********** 2026-04-05 03:06:26.250644 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}) 2026-04-05 03:06:26.250666 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}) 2026-04-05 03:06:26.250676 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}) 2026-04-05 03:06:26.250686 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}) 2026-04-05 03:06:26.250696 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}) 2026-04-05 03:06:26.250715 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}) 2026-04-05 03:07:06.674091 | orchestrator | 2026-04-05 03:07:06.674219 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 03:07:06.674244 | orchestrator | Sunday 05 April 2026 03:06:26 +0000 (0:00:39.218) 0:07:24.273 ********** 2026-04-05 03:07:06.674262 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.674280 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
03:07:06.674297 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:07:06.674312 | orchestrator | 2026-04-05 03:07:06.674329 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 03:07:06.674376 | orchestrator | Sunday 05 April 2026 03:06:26 +0000 (0:00:00.390) 0:07:24.664 ********** 2026-04-05 03:07:06.674394 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:07:06.674410 | orchestrator | 2026-04-05 03:07:06.674427 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 03:07:06.674443 | orchestrator | Sunday 05 April 2026 03:06:27 +0000 (0:00:00.892) 0:07:25.556 ********** 2026-04-05 03:07:06.674460 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:07:06.674477 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:07:06.674493 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:07:06.674509 | orchestrator | 2026-04-05 03:07:06.674525 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-05 03:07:06.674540 | orchestrator | Sunday 05 April 2026 03:06:28 +0000 (0:00:00.707) 0:07:26.263 ********** 2026-04-05 03:07:06.674560 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:07:06.674578 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:07:06.674594 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:07:06.674611 | orchestrator | 2026-04-05 03:07:06.674628 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-05 03:07:06.674644 | orchestrator | Sunday 05 April 2026 03:06:30 +0000 (0:00:02.659) 0:07:28.923 ********** 2026-04-05 03:07:06.674660 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:07:06.674676 | orchestrator | 2026-04-05 03:07:06.674692 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-05 03:07:06.674708 | orchestrator | Sunday 05 April 2026 03:06:31 +0000 (0:00:00.847) 0:07:29.770 ********** 2026-04-05 03:07:06.674725 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:07:06.674742 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:07:06.674757 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:07:06.674772 | orchestrator | 2026-04-05 03:07:06.674788 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-05 03:07:06.674806 | orchestrator | Sunday 05 April 2026 03:06:33 +0000 (0:00:01.316) 0:07:31.087 ********** 2026-04-05 03:07:06.674851 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:07:06.674870 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:07:06.674883 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:07:06.674893 | orchestrator | 2026-04-05 03:07:06.674902 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-05 03:07:06.674912 | orchestrator | Sunday 05 April 2026 03:06:34 +0000 (0:00:01.241) 0:07:32.329 ********** 2026-04-05 03:07:06.674922 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:07:06.674931 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:07:06.674941 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:07:06.674950 | orchestrator | 2026-04-05 03:07:06.674960 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-05 03:07:06.674969 | orchestrator | Sunday 05 April 2026 03:06:36 +0000 (0:00:02.324) 0:07:34.653 ********** 2026-04-05 03:07:06.674979 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.674988 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.674998 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:07:06.675007 | orchestrator | 2026-04-05 03:07:06.675017 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-05 03:07:06.675027 | orchestrator | Sunday 05 April 2026 03:06:36 +0000 (0:00:00.377) 0:07:35.030 ********** 2026-04-05 03:07:06.675036 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.675046 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.675055 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:07:06.675065 | orchestrator | 2026-04-05 03:07:06.675074 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 03:07:06.675084 | orchestrator | Sunday 05 April 2026 03:06:37 +0000 (0:00:00.373) 0:07:35.404 ********** 2026-04-05 03:07:06.675094 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-05 03:07:06.675118 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 03:07:06.675128 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-05 03:07:06.675137 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-05 03:07:06.675147 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-05 03:07:06.675156 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-05 03:07:06.675166 | orchestrator | 2026-04-05 03:07:06.675175 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 03:07:06.675185 | orchestrator | Sunday 05 April 2026 03:06:38 +0000 (0:00:01.044) 0:07:36.449 ********** 2026-04-05 03:07:06.675195 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 03:07:06.675204 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-05 03:07:06.675214 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 03:07:06.675224 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-05 03:07:06.675233 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 03:07:06.675243 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 03:07:06.675253 | orchestrator | 2026-04-05 03:07:06.675262 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-05 03:07:06.675272 | orchestrator | Sunday 05 April 2026 03:06:40 +0000 (0:00:02.573) 0:07:39.023 ********** 2026-04-05 03:07:06.675282 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 03:07:06.675291 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-05 03:07:06.675301 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 03:07:06.675310 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-05 03:07:06.675320 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 03:07:06.675383 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 03:07:06.675394 | orchestrator | 2026-04-05 03:07:06.675425 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 03:07:06.675435 | orchestrator | Sunday 05 April 2026 03:06:44 +0000 (0:00:03.784) 0:07:42.807 ********** 2026-04-05 03:07:06.675445 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.675454 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.675473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:07:06.675483 | orchestrator | 2026-04-05 03:07:06.675493 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-05 03:07:06.675503 | orchestrator | Sunday 05 April 2026 03:06:47 +0000 (0:00:03.052) 0:07:45.859 ********** 2026-04-05 03:07:06.675512 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.675522 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.675532 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-05 03:07:06.675542 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:07:06.675551 | orchestrator | 2026-04-05 03:07:06.675561 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 03:07:06.675571 | orchestrator | Sunday 05 April 2026 03:07:00 +0000 (0:00:12.795) 0:07:58.654 ********** 2026-04-05 03:07:06.675580 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.675590 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.675599 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:07:06.675609 | orchestrator | 2026-04-05 03:07:06.675619 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 03:07:06.675628 | orchestrator | Sunday 05 April 2026 03:07:01 +0000 (0:00:01.381) 0:08:00.035 ********** 2026-04-05 03:07:06.675638 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:07:06.675648 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:07:06.675657 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:07:06.675667 | orchestrator | 2026-04-05 03:07:06.675677 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-05 03:07:06.675686 | orchestrator | Sunday 05 April 2026 03:07:02 +0000 (0:00:00.395) 0:08:00.431 ********** 2026-04-05 03:07:06.675696 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:07:06.675706 | orchestrator | 2026-04-05 03:07:06.675715 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-05 03:07:06.675725 | orchestrator | Sunday 05 April 2026 03:07:03 +0000 (0:00:00.934) 0:08:01.366 ********** 2026-04-05 03:07:06.675735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:07:06.675745 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-04-05 03:07:06.675755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 03:07:06.675764 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.675774 | orchestrator |
2026-04-05 03:07:06.675783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-05 03:07:06.675793 | orchestrator | Sunday 05 April 2026 03:07:03 +0000 (0:00:00.431) 0:08:01.797 **********
2026-04-05 03:07:06.675803 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.675812 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:06.675822 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:06.675831 | orchestrator |
2026-04-05 03:07:06.675841 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-05 03:07:06.675850 | orchestrator | Sunday 05 April 2026 03:07:04 +0000 (0:00:00.379) 0:08:02.176 **********
2026-04-05 03:07:06.675860 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.675869 | orchestrator |
2026-04-05 03:07:06.675879 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-05 03:07:06.675888 | orchestrator | Sunday 05 April 2026 03:07:04 +0000 (0:00:00.248) 0:08:02.424 **********
2026-04-05 03:07:06.675898 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.675907 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:06.675917 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:06.675926 | orchestrator |
2026-04-05 03:07:06.675936 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-05 03:07:06.675945 | orchestrator | Sunday 05 April 2026 03:07:04 +0000 (0:00:00.612) 0:08:03.037 **********
2026-04-05 03:07:06.675962 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.675972 | orchestrator |
2026-04-05 03:07:06.675987 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-05 03:07:06.675997 | orchestrator | Sunday 05 April 2026 03:07:05 +0000 (0:00:00.259) 0:08:03.296 **********
2026-04-05 03:07:06.676006 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.676016 | orchestrator |
2026-04-05 03:07:06.676025 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-05 03:07:06.676035 | orchestrator | Sunday 05 April 2026 03:07:05 +0000 (0:00:00.324) 0:08:03.621 **********
2026-04-05 03:07:06.676045 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.676054 | orchestrator |
2026-04-05 03:07:06.676064 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-05 03:07:06.676073 | orchestrator | Sunday 05 April 2026 03:07:05 +0000 (0:00:00.140) 0:08:03.761 **********
2026-04-05 03:07:06.676083 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.676092 | orchestrator |
2026-04-05 03:07:06.676102 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-05 03:07:06.676111 | orchestrator | Sunday 05 April 2026 03:07:05 +0000 (0:00:00.275) 0:08:04.036 **********
2026-04-05 03:07:06.676121 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.676130 | orchestrator |
2026-04-05 03:07:06.676140 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-05 03:07:06.676150 | orchestrator | Sunday 05 April 2026 03:07:06 +0000 (0:00:00.264) 0:08:04.300 **********
2026-04-05 03:07:06.676159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 03:07:06.676169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 03:07:06.676179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 03:07:06.676189 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:06.676199 | orchestrator |
2026-04-05 03:07:06.676214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-05 03:07:28.217889 | orchestrator | Sunday 05 April 2026 03:07:06 +0000 (0:00:00.414) 0:08:04.715 **********
2026-04-05 03:07:28.217998 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218071 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.218082 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.218090 | orchestrator |
2026-04-05 03:07:28.218098 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-05 03:07:28.218106 | orchestrator | Sunday 05 April 2026 03:07:06 +0000 (0:00:00.338) 0:08:05.054 **********
2026-04-05 03:07:28.218113 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218121 | orchestrator |
2026-04-05 03:07:28.218128 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-05 03:07:28.218136 | orchestrator | Sunday 05 April 2026 03:07:07 +0000 (0:00:00.254) 0:08:05.309 **********
2026-04-05 03:07:28.218143 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218150 | orchestrator |
2026-04-05 03:07:28.218158 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-05 03:07:28.218165 | orchestrator |
2026-04-05 03:07:28.218172 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 03:07:28.218180 | orchestrator | Sunday 05 April 2026 03:07:08 +0000 (0:00:01.353) 0:08:06.663 **********
2026-04-05 03:07:28.218188 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:07:28.218197 | orchestrator |
2026-04-05 03:07:28.218204 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 03:07:28.218212 | orchestrator | Sunday 05 April 2026 03:07:09 +0000 (0:00:01.307) 0:08:07.970 **********
2026-04-05 03:07:28.218219 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:07:28.218247 | orchestrator |
2026-04-05 03:07:28.218256 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 03:07:28.218263 | orchestrator | Sunday 05 April 2026 03:07:11 +0000 (0:00:01.447) 0:08:09.417 **********
2026-04-05 03:07:28.218270 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218278 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.218285 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.218292 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.218300 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.218307 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.218315 | orchestrator |
2026-04-05 03:07:28.218322 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 03:07:28.218451 | orchestrator | Sunday 05 April 2026 03:07:12 +0000 (0:00:01.596) 0:08:11.014 **********
2026-04-05 03:07:28.218482 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.218492 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.218500 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.218509 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.218518 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.218526 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.218535 | orchestrator |
2026-04-05 03:07:28.218545 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 03:07:28.218554 | orchestrator | Sunday 05 April 2026 03:07:13 +0000 (0:00:00.779) 0:08:11.793 **********
2026-04-05 03:07:28.218563 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.218572 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.218580 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.218587 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.218595 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.218602 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.218609 | orchestrator |
2026-04-05 03:07:28.218616 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 03:07:28.218624 | orchestrator | Sunday 05 April 2026 03:07:14 +0000 (0:00:00.965) 0:08:12.758 **********
2026-04-05 03:07:28.218631 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.218638 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.218645 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.218653 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.218660 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.218667 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.218674 | orchestrator |
2026-04-05 03:07:28.218694 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 03:07:28.218702 | orchestrator | Sunday 05 April 2026 03:07:15 +0000 (0:00:00.755) 0:08:13.514 **********
2026-04-05 03:07:28.218709 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218717 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.218724 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.218731 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.218738 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.218746 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.218753 | orchestrator |
2026-04-05 03:07:28.218760 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 03:07:28.218767 | orchestrator | Sunday 05 April 2026 03:07:16 +0000 (0:00:01.440) 0:08:14.954 **********
2026-04-05 03:07:28.218774 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218782 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.218789 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.218796 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.218804 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.218811 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.218818 | orchestrator |
2026-04-05 03:07:28.218826 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 03:07:28.218833 | orchestrator | Sunday 05 April 2026 03:07:17 +0000 (0:00:00.649) 0:08:15.604 **********
2026-04-05 03:07:28.218840 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.218857 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.218864 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.218872 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.218879 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.218886 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.218893 | orchestrator |
2026-04-05 03:07:28.218918 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 03:07:28.218926 | orchestrator | Sunday 05 April 2026 03:07:18 +0000 (0:00:00.886) 0:08:16.491 **********
2026-04-05 03:07:28.218933 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.218940 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.218948 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.218955 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.218962 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.218969 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.218976 | orchestrator |
2026-04-05 03:07:28.218983 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 03:07:28.218991 | orchestrator | Sunday 05 April 2026 03:07:19 +0000 (0:00:01.093) 0:08:17.584 **********
2026-04-05 03:07:28.218998 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.219005 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.219012 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.219019 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.219026 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.219033 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.219040 | orchestrator |
2026-04-05 03:07:28.219048 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 03:07:28.219055 | orchestrator | Sunday 05 April 2026 03:07:20 +0000 (0:00:01.436) 0:08:19.021 **********
2026-04-05 03:07:28.219062 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.219070 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.219077 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.219084 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219091 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219099 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219106 | orchestrator |
2026-04-05 03:07:28.219113 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 03:07:28.219121 | orchestrator | Sunday 05 April 2026 03:07:21 +0000 (0:00:00.695) 0:08:19.716 **********
2026-04-05 03:07:28.219128 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.219135 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.219142 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.219149 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.219157 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.219164 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.219171 | orchestrator |
2026-04-05 03:07:28.219178 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 03:07:28.219185 | orchestrator | Sunday 05 April 2026 03:07:22 +0000 (0:00:00.931) 0:08:20.648 **********
2026-04-05 03:07:28.219193 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.219200 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.219207 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.219214 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219221 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219229 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219236 | orchestrator |
2026-04-05 03:07:28.219243 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 03:07:28.219250 | orchestrator | Sunday 05 April 2026 03:07:23 +0000 (0:00:00.728) 0:08:21.376 **********
2026-04-05 03:07:28.219257 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.219265 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.219272 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.219279 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219286 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219299 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219306 | orchestrator |
2026-04-05 03:07:28.219313 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 03:07:28.219320 | orchestrator | Sunday 05 April 2026 03:07:24 +0000 (0:00:00.940) 0:08:22.316 **********
2026-04-05 03:07:28.219346 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.219360 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.219370 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.219377 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219384 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219391 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219399 | orchestrator |
2026-04-05 03:07:28.219406 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 03:07:28.219413 | orchestrator | Sunday 05 April 2026 03:07:24 +0000 (0:00:00.641) 0:08:22.958 **********
2026-04-05 03:07:28.219420 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.219427 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.219435 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.219442 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219449 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219456 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219463 | orchestrator |
2026-04-05 03:07:28.219471 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 03:07:28.219478 | orchestrator | Sunday 05 April 2026 03:07:25 +0000 (0:00:00.970) 0:08:23.928 **********
2026-04-05 03:07:28.219486 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.219493 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.219500 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.219507 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:07:28.219514 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:07:28.219521 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:07:28.219529 | orchestrator |
2026-04-05 03:07:28.219536 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 03:07:28.219543 | orchestrator | Sunday 05 April 2026 03:07:26 +0000 (0:00:00.652) 0:08:24.581 **********
2026-04-05 03:07:28.219551 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:07:28.219558 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:07:28.219565 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:07:28.219572 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.219579 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.219586 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.219593 | orchestrator |
2026-04-05 03:07:28.219600 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 03:07:28.219608 | orchestrator | Sunday 05 April 2026 03:07:27 +0000 (0:00:01.003) 0:08:25.585 **********
2026-04-05 03:07:28.219615 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:07:28.219622 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:07:28.219629 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:07:28.219636 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:07:28.219643 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:07:28.219650 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:07:28.219657 | orchestrator |
2026-04-05 03:07:28.219670 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 03:08:01.491185 | orchestrator | Sunday 05 April 2026 03:07:28 +0000 (0:00:00.674) 0:08:26.259 **********
2026-04-05 03:08:01.491287 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.491299 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.491309 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.491317 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:08:01.491425 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:08:01.491435 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:08:01.491444 | orchestrator |
2026-04-05 03:08:01.491454 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-05 03:08:01.491463 | orchestrator | Sunday 05 April 2026 03:07:29 +0000 (0:00:01.496) 0:08:27.756 **********
2026-04-05 03:08:01.491497 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:08:01.491505 | orchestrator |
2026-04-05 03:08:01.491513 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-05 03:08:01.491520 | orchestrator | Sunday 05 April 2026 03:07:33 +0000 (0:00:04.183) 0:08:31.939 **********
2026-04-05 03:08:01.491528 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:08:01.491537 | orchestrator |
2026-04-05 03:08:01.491545 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-05 03:08:01.491554 | orchestrator | Sunday 05 April 2026 03:07:36 +0000 (0:00:02.848) 0:08:34.788 **********
2026-04-05 03:08:01.491562 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:08:01.491571 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:08:01.491579 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:08:01.491587 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:08:01.491595 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:08:01.491604 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:08:01.491612 | orchestrator |
2026-04-05 03:08:01.491620 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-05 03:08:01.491629 | orchestrator | Sunday 05 April 2026 03:07:38 +0000 (0:00:01.603) 0:08:36.391 **********
2026-04-05 03:08:01.491637 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:08:01.491645 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:08:01.491653 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:08:01.491662 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:08:01.491670 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:08:01.491678 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:08:01.491687 | orchestrator |
2026-04-05 03:08:01.491695 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-05 03:08:01.491704 | orchestrator | Sunday 05 April 2026 03:07:39 +0000 (0:00:01.266) 0:08:37.658 **********
2026-04-05 03:08:01.491716 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:08:01.491727 | orchestrator |
2026-04-05 03:08:01.491737 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-05 03:08:01.491747 | orchestrator | Sunday 05 April 2026 03:07:40 +0000 (0:00:01.379) 0:08:39.038 **********
2026-04-05 03:08:01.491756 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:08:01.491766 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:08:01.491775 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:08:01.491785 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:08:01.491794 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:08:01.491804 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:08:01.491814 | orchestrator |
2026-04-05 03:08:01.491823 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-05 03:08:01.491831 | orchestrator | Sunday 05 April 2026 03:07:42 +0000 (0:00:01.658) 0:08:40.696 **********
2026-04-05 03:08:01.491840 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:08:01.491848 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:08:01.491855 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:08:01.491862 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:08:01.491870 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:08:01.491877 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:08:01.491885 | orchestrator |
2026-04-05 03:08:01.491893 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-05 03:08:01.491902 | orchestrator | Sunday 05 April 2026 03:07:46 +0000 (0:00:03.997) 0:08:44.693 **********
2026-04-05 03:08:01.491917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:08:01.491926 | orchestrator |
2026-04-05 03:08:01.491935 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-05 03:08:01.491943 | orchestrator | Sunday 05 April 2026 03:07:47 +0000 (0:00:01.361) 0:08:46.055 **********
2026-04-05 03:08:01.491959 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.491968 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.491976 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.491985 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:08:01.491993 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:08:01.492001 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:08:01.492010 | orchestrator |
2026-04-05 03:08:01.492018 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-05 03:08:01.492027 | orchestrator | Sunday 05 April 2026 03:07:48 +0000 (0:00:00.693) 0:08:46.749 **********
2026-04-05 03:08:01.492035 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:08:01.492044 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:08:01.492052 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:08:01.492061 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:08:01.492069 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:08:01.492078 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:08:01.492086 | orchestrator |
2026-04-05 03:08:01.492095 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-05 03:08:01.492103 | orchestrator | Sunday 05 April 2026 03:07:51 +0000 (0:00:02.578) 0:08:49.328 **********
2026-04-05 03:08:01.492112 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492120 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492128 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492137 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:08:01.492145 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:08:01.492153 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:08:01.492162 | orchestrator |
2026-04-05 03:08:01.492190 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-05 03:08:01.492199 | orchestrator |
2026-04-05 03:08:01.492207 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 03:08:01.492216 | orchestrator | Sunday 05 April 2026 03:07:52 +0000 (0:00:00.917) 0:08:50.245 **********
2026-04-05 03:08:01.492225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:08:01.492234 | orchestrator |
2026-04-05 03:08:01.492242 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 03:08:01.492250 | orchestrator | Sunday 05 April 2026 03:07:53 +0000 (0:00:00.858) 0:08:51.104 **********
2026-04-05 03:08:01.492259 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:08:01.492268 | orchestrator |
2026-04-05 03:08:01.492276 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 03:08:01.492285 | orchestrator | Sunday 05 April 2026 03:07:53 +0000 (0:00:00.811) 0:08:51.915 **********
2026-04-05 03:08:01.492293 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492302 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492310 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492319 | orchestrator |
2026-04-05 03:08:01.492344 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 03:08:01.492353 | orchestrator | Sunday 05 April 2026 03:07:54 +0000 (0:00:00.352) 0:08:52.267 **********
2026-04-05 03:08:01.492362 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492370 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492379 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492387 | orchestrator |
2026-04-05 03:08:01.492396 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 03:08:01.492405 | orchestrator | Sunday 05 April 2026 03:07:54 +0000 (0:00:00.739) 0:08:53.007 **********
2026-04-05 03:08:01.492413 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492422 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492430 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492439 | orchestrator |
2026-04-05 03:08:01.492448 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 03:08:01.492462 | orchestrator | Sunday 05 April 2026 03:07:55 +0000 (0:00:00.763) 0:08:53.771 **********
2026-04-05 03:08:01.492470 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492479 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492488 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492496 | orchestrator |
2026-04-05 03:08:01.492505 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 03:08:01.492513 | orchestrator | Sunday 05 April 2026 03:07:56 +0000 (0:00:01.040) 0:08:54.811 **********
2026-04-05 03:08:01.492522 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492531 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492539 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492548 | orchestrator |
2026-04-05 03:08:01.492556 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 03:08:01.492565 | orchestrator | Sunday 05 April 2026 03:07:57 +0000 (0:00:00.343) 0:08:55.155 **********
2026-04-05 03:08:01.492573 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492582 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492590 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492599 | orchestrator |
2026-04-05 03:08:01.492607 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 03:08:01.492616 | orchestrator | Sunday 05 April 2026 03:07:57 +0000 (0:00:00.344) 0:08:55.499 **********
2026-04-05 03:08:01.492625 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492634 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492642 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492651 | orchestrator |
2026-04-05 03:08:01.492659 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 03:08:01.492668 | orchestrator | Sunday 05 April 2026 03:07:57 +0000 (0:00:00.316) 0:08:55.816 **********
2026-04-05 03:08:01.492677 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492685 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492694 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492703 | orchestrator |
2026-04-05 03:08:01.492711 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 03:08:01.492724 | orchestrator | Sunday 05 April 2026 03:07:58 +0000 (0:00:01.105) 0:08:56.921 **********
2026-04-05 03:08:01.492733 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492742 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492750 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492759 | orchestrator |
2026-04-05 03:08:01.492767 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 03:08:01.492776 | orchestrator | Sunday 05 April 2026 03:07:59 +0000 (0:00:00.819) 0:08:57.740 **********
2026-04-05 03:08:01.492784 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492792 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492801 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492810 | orchestrator |
2026-04-05 03:08:01.492818 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 03:08:01.492827 | orchestrator | Sunday 05 April 2026 03:08:00 +0000 (0:00:00.338) 0:08:58.079 **********
2026-04-05 03:08:01.492835 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:01.492844 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:01.492852 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:01.492859 | orchestrator |
2026-04-05 03:08:01.492866 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 03:08:01.492874 | orchestrator | Sunday 05 April 2026 03:08:00 +0000 (0:00:00.375) 0:08:58.454 **********
2026-04-05 03:08:01.492882 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492891 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492900 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492908 | orchestrator |
2026-04-05 03:08:01.492916 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 03:08:01.492925 | orchestrator | Sunday 05 April 2026 03:08:01 +0000 (0:00:00.690) 0:08:59.145 **********
2026-04-05 03:08:01.492940 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:01.492949 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:01.492957 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:01.492966 | orchestrator |
2026-04-05 03:08:01.492980 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 03:08:39.286648 | orchestrator | Sunday 05 April 2026 03:08:01 +0000 (0:00:00.389) 0:08:59.535 **********
2026-04-05 03:08:39.286758 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:39.286777 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:39.286794 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:39.286812 | orchestrator |
2026-04-05 03:08:39.286829 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 03:08:39.286845 | orchestrator | Sunday 05 April 2026 03:08:01 +0000 (0:00:00.370) 0:08:59.905 **********
2026-04-05 03:08:39.286862 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:39.286879 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:39.286896 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:39.286911 | orchestrator |
2026-04-05 03:08:39.286927 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 03:08:39.286944 | orchestrator | Sunday 05 April 2026 03:08:02 +0000 (0:00:00.327) 0:09:00.233 **********
2026-04-05 03:08:39.286962 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:39.286978 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:39.286996 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:39.287011 | orchestrator |
2026-04-05 03:08:39.287029 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 03:08:39.287047 | orchestrator | Sunday 05 April 2026 03:08:02 +0000 (0:00:00.656) 0:09:00.889 **********
2026-04-05 03:08:39.287063 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:39.287073 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:39.287083 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:39.287092 | orchestrator |
2026-04-05 03:08:39.287102 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 03:08:39.287112 | orchestrator | Sunday 05 April 2026 03:08:03 +0000 (0:00:00.413) 0:09:01.302 **********
2026-04-05 03:08:39.287122 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:39.287131 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:39.287141 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:39.287150 | orchestrator |
2026-04-05 03:08:39.287160 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 03:08:39.287170 | orchestrator | Sunday 05 April 2026 03:08:03 +0000 (0:00:00.360) 0:09:01.663 **********
2026-04-05 03:08:39.287180 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:08:39.287192 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:08:39.287203 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:08:39.287214 | orchestrator |
2026-04-05 03:08:39.287225 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-05 03:08:39.287236 | orchestrator | Sunday 05 April 2026 03:08:04 +0000 (0:00:00.885) 0:09:02.549 **********
2026-04-05 03:08:39.287248 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:08:39.287259 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:08:39.287271 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-05 03:08:39.287284 | orchestrator |
2026-04-05 03:08:39.287296 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-05 03:08:39.287364 | orchestrator | Sunday 05 April 2026 03:08:04 +0000 (0:00:00.492) 0:09:03.041 **********
2026-04-05 03:08:39.287378 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:08:39.287390 | orchestrator |
2026-04-05 03:08:39.287401 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-05 03:08:39.287413 | orchestrator | Sunday 05 April 2026 03:08:07 +0000 (0:00:02.207) 0:09:05.249 **********
2026-04-05 03:08:39.287425 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-05 03:08:39.287465 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:08:39.287478 | orchestrator |
2026-04-05 03:08:39.287490 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-05 03:08:39.287501 | orchestrator | Sunday 05 April 2026 03:08:07 +0000 (0:00:00.232) 0:09:05.482 **********
2026-04-05 03:08:39.287531 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 03:08:39.287553 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-05 03:08:39.287565 | orchestrator |
2026-04-05 03:08:39.287577 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-05 03:08:39.287586 | orchestrator | Sunday 05 April 2026 03:08:16 +0000 (0:00:08.671) 0:09:14.153 **********
2026-04-05 03:08:39.287596 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 03:08:39.287606 | orchestrator |
2026-04-05 03:08:39.287615 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-05 03:08:39.287625 | orchestrator | Sunday 05 April 2026 03:08:19 +0000 (0:00:03.694) 0:09:17.847 **********
2026-04-05 03:08:39.287634 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:08:39.287645 | orchestrator |
2026-04-05 03:08:39.287654 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-05 03:08:39.287664 | orchestrator | Sunday 05 April 2026 03:08:20 +0000 (0:00:00.906) 0:09:18.753 **********
2026-04-05 03:08:39.287673 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 03:08:39.287683 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 03:08:39.287712 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 03:08:39.287722 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-05 03:08:39.287732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-05 03:08:39.287741 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-05 03:08:39.287751 | orchestrator |
2026-04-05 03:08:39.287760 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-05 03:08:39.287770 | orchestrator | Sunday 05 April 2026 03:08:21 +0000 (0:00:01.084) 0:09:19.838 **********
2026-04-05 03:08:39.287779 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 03:08:39.287789 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-05 03:08:39.287798 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 03:08:39.287808 | orchestrator |
2026-04-05 03:08:39.287817 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-05 03:08:39.287827 | orchestrator | Sunday 05 April 2026 03:08:24 +0000 (0:00:02.307) 0:09:22.145 **********
2026-04-05 03:08:39.287836 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 03:08:39.287847 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-04-05 03:08:39.287856 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.287866 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 03:08:39.287875 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 03:08:39.287885 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.287894 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 03:08:39.287903 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 03:08:39.287921 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.287930 | orchestrator | 2026-04-05 03:08:39.287940 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-05 03:08:39.287949 | orchestrator | Sunday 05 April 2026 03:08:25 +0000 (0:00:01.553) 0:09:23.699 ********** 2026-04-05 03:08:39.287959 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.287968 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.287978 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.287987 | orchestrator | 2026-04-05 03:08:39.287997 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-05 03:08:39.288006 | orchestrator | Sunday 05 April 2026 03:08:28 +0000 (0:00:02.699) 0:09:26.398 ********** 2026-04-05 03:08:39.288016 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:08:39.288025 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:08:39.288034 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:08:39.288044 | orchestrator | 2026-04-05 03:08:39.288053 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-05 03:08:39.288063 | orchestrator | Sunday 05 April 2026 03:08:28 +0000 (0:00:00.366) 0:09:26.765 ********** 2026-04-05 03:08:39.288072 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-05 03:08:39.288082 | orchestrator | 2026-04-05 03:08:39.288091 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-05 03:08:39.288101 | orchestrator | Sunday 05 April 2026 03:08:29 +0000 (0:00:00.899) 0:09:27.665 ********** 2026-04-05 03:08:39.288110 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:08:39.288120 | orchestrator | 2026-04-05 03:08:39.288129 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-05 03:08:39.288139 | orchestrator | Sunday 05 April 2026 03:08:30 +0000 (0:00:00.606) 0:09:28.272 ********** 2026-04-05 03:08:39.288148 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.288158 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.288167 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.288177 | orchestrator | 2026-04-05 03:08:39.288186 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-05 03:08:39.288200 | orchestrator | Sunday 05 April 2026 03:08:31 +0000 (0:00:01.346) 0:09:29.618 ********** 2026-04-05 03:08:39.288210 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.288219 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.288229 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.288238 | orchestrator | 2026-04-05 03:08:39.288248 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-05 03:08:39.288257 | orchestrator | Sunday 05 April 2026 03:08:32 +0000 (0:00:01.436) 0:09:31.055 ********** 2026-04-05 03:08:39.288267 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.288276 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.288286 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.288295 | orchestrator | 2026-04-05 
03:08:39.288304 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-05 03:08:39.288335 | orchestrator | Sunday 05 April 2026 03:08:34 +0000 (0:00:01.835) 0:09:32.890 ********** 2026-04-05 03:08:39.288345 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.288354 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.288364 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.288373 | orchestrator | 2026-04-05 03:08:39.288383 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-05 03:08:39.288392 | orchestrator | Sunday 05 April 2026 03:08:36 +0000 (0:00:02.091) 0:09:34.982 ********** 2026-04-05 03:08:39.288402 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:08:39.288411 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:08:39.288421 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:08:39.288430 | orchestrator | 2026-04-05 03:08:39.288440 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 03:08:39.288456 | orchestrator | Sunday 05 April 2026 03:08:38 +0000 (0:00:01.641) 0:09:36.623 ********** 2026-04-05 03:08:39.288465 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:08:39.288474 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:08:39.288484 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:08:39.288494 | orchestrator | 2026-04-05 03:08:39.288503 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-05 03:08:39.288519 | orchestrator | Sunday 05 April 2026 03:08:39 +0000 (0:00:00.704) 0:09:37.328 ********** 2026-04-05 03:09:00.482960 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:00.483045 | orchestrator | 2026-04-05 03:09:00.483056 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-05 03:09:00.483066 | orchestrator | Sunday 05 April 2026 03:08:40 +0000 (0:00:00.857) 0:09:38.185 ********** 2026-04-05 03:09:00.483073 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483081 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483088 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483095 | orchestrator | 2026-04-05 03:09:00.483102 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-05 03:09:00.483109 | orchestrator | Sunday 05 April 2026 03:08:40 +0000 (0:00:00.380) 0:09:38.565 ********** 2026-04-05 03:09:00.483116 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:09:00.483124 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:00.483130 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:00.483137 | orchestrator | 2026-04-05 03:09:00.483144 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-05 03:09:00.483151 | orchestrator | Sunday 05 April 2026 03:08:41 +0000 (0:00:01.326) 0:09:39.892 ********** 2026-04-05 03:09:00.483158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:09:00.483166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:09:00.483173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:09:00.483180 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483186 | orchestrator | 2026-04-05 03:09:00.483193 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-05 03:09:00.483200 | orchestrator | Sunday 05 April 2026 03:08:42 +0000 (0:00:00.966) 0:09:40.859 ********** 2026-04-05 03:09:00.483207 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483214 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483220 | orchestrator | ok: [testbed-node-5] 2026-04-05 
03:09:00.483227 | orchestrator | 2026-04-05 03:09:00.483234 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-05 03:09:00.483241 | orchestrator | 2026-04-05 03:09:00.483247 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 03:09:00.483254 | orchestrator | Sunday 05 April 2026 03:08:43 +0000 (0:00:00.925) 0:09:41.785 ********** 2026-04-05 03:09:00.483262 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:00.483270 | orchestrator | 2026-04-05 03:09:00.483276 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 03:09:00.483283 | orchestrator | Sunday 05 April 2026 03:08:44 +0000 (0:00:00.572) 0:09:42.357 ********** 2026-04-05 03:09:00.483290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:00.483368 | orchestrator | 2026-04-05 03:09:00.483375 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 03:09:00.483382 | orchestrator | Sunday 05 April 2026 03:08:45 +0000 (0:00:00.857) 0:09:43.215 ********** 2026-04-05 03:09:00.483389 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483397 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483404 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.483433 | orchestrator | 2026-04-05 03:09:00.483440 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 03:09:00.483447 | orchestrator | Sunday 05 April 2026 03:08:45 +0000 (0:00:00.353) 0:09:43.568 ********** 2026-04-05 03:09:00.483454 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483461 | orchestrator | ok: [testbed-node-4] 2026-04-05 
03:09:00.483467 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483474 | orchestrator | 2026-04-05 03:09:00.483481 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 03:09:00.483487 | orchestrator | Sunday 05 April 2026 03:08:46 +0000 (0:00:00.750) 0:09:44.319 ********** 2026-04-05 03:09:00.483494 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483512 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483522 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483530 | orchestrator | 2026-04-05 03:09:00.483539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 03:09:00.483548 | orchestrator | Sunday 05 April 2026 03:08:47 +0000 (0:00:01.075) 0:09:45.394 ********** 2026-04-05 03:09:00.483556 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483564 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483572 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483579 | orchestrator | 2026-04-05 03:09:00.483587 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 03:09:00.483595 | orchestrator | Sunday 05 April 2026 03:08:48 +0000 (0:00:00.754) 0:09:46.149 ********** 2026-04-05 03:09:00.483603 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483611 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483620 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.483628 | orchestrator | 2026-04-05 03:09:00.483636 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 03:09:00.483645 | orchestrator | Sunday 05 April 2026 03:08:48 +0000 (0:00:00.354) 0:09:46.503 ********** 2026-04-05 03:09:00.483653 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483661 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483670 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 03:09:00.483679 | orchestrator | 2026-04-05 03:09:00.483687 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 03:09:00.483695 | orchestrator | Sunday 05 April 2026 03:08:48 +0000 (0:00:00.349) 0:09:46.852 ********** 2026-04-05 03:09:00.483704 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483712 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483720 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.483729 | orchestrator | 2026-04-05 03:09:00.483737 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 03:09:00.483745 | orchestrator | Sunday 05 April 2026 03:08:49 +0000 (0:00:00.625) 0:09:47.478 ********** 2026-04-05 03:09:00.483754 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483762 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483782 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483791 | orchestrator | 2026-04-05 03:09:00.483800 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 03:09:00.483808 | orchestrator | Sunday 05 April 2026 03:08:50 +0000 (0:00:00.777) 0:09:48.256 ********** 2026-04-05 03:09:00.483816 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483824 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.483832 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.483841 | orchestrator | 2026-04-05 03:09:00.483848 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 03:09:00.483855 | orchestrator | Sunday 05 April 2026 03:08:50 +0000 (0:00:00.746) 0:09:49.002 ********** 2026-04-05 03:09:00.483862 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483868 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483875 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
03:09:00.483882 | orchestrator | 2026-04-05 03:09:00.483888 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 03:09:00.483901 | orchestrator | Sunday 05 April 2026 03:08:51 +0000 (0:00:00.330) 0:09:49.333 ********** 2026-04-05 03:09:00.483911 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.483922 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.483934 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.483953 | orchestrator | 2026-04-05 03:09:00.483965 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 03:09:00.483976 | orchestrator | Sunday 05 April 2026 03:08:51 +0000 (0:00:00.673) 0:09:50.007 ********** 2026-04-05 03:09:00.483988 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.483999 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.484008 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.484019 | orchestrator | 2026-04-05 03:09:00.484030 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 03:09:00.484042 | orchestrator | Sunday 05 April 2026 03:08:52 +0000 (0:00:00.361) 0:09:50.368 ********** 2026-04-05 03:09:00.484054 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.484065 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.484077 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.484089 | orchestrator | 2026-04-05 03:09:00.484100 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 03:09:00.484112 | orchestrator | Sunday 05 April 2026 03:08:52 +0000 (0:00:00.367) 0:09:50.736 ********** 2026-04-05 03:09:00.484124 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.484135 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.484142 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.484149 | orchestrator | 2026-04-05 
03:09:00.484155 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 03:09:00.484162 | orchestrator | Sunday 05 April 2026 03:08:53 +0000 (0:00:00.357) 0:09:51.093 ********** 2026-04-05 03:09:00.484169 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.484176 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.484182 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.484189 | orchestrator | 2026-04-05 03:09:00.484196 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 03:09:00.484203 | orchestrator | Sunday 05 April 2026 03:08:53 +0000 (0:00:00.614) 0:09:51.708 ********** 2026-04-05 03:09:00.484209 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.484216 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.484223 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.484229 | orchestrator | 2026-04-05 03:09:00.484236 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 03:09:00.484243 | orchestrator | Sunday 05 April 2026 03:08:54 +0000 (0:00:00.362) 0:09:52.071 ********** 2026-04-05 03:09:00.484249 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:00.484256 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:00.484263 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:00.484269 | orchestrator | 2026-04-05 03:09:00.484276 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 03:09:00.484283 | orchestrator | Sunday 05 April 2026 03:08:54 +0000 (0:00:00.396) 0:09:52.467 ********** 2026-04-05 03:09:00.484289 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.484312 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.484319 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.484325 | orchestrator | 2026-04-05 03:09:00.484338 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 03:09:00.484345 | orchestrator | Sunday 05 April 2026 03:08:54 +0000 (0:00:00.377) 0:09:52.845 ********** 2026-04-05 03:09:00.484352 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:00.484358 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:00.484365 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:00.484371 | orchestrator | 2026-04-05 03:09:00.484378 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 03:09:00.484385 | orchestrator | Sunday 05 April 2026 03:08:55 +0000 (0:00:00.949) 0:09:53.794 ********** 2026-04-05 03:09:00.484399 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:00.484406 | orchestrator | 2026-04-05 03:09:00.484413 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 03:09:00.484419 | orchestrator | Sunday 05 April 2026 03:08:56 +0000 (0:00:00.567) 0:09:54.361 ********** 2026-04-05 03:09:00.484426 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:00.484433 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 03:09:00.484440 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:09:00.484447 | orchestrator | 2026-04-05 03:09:00.484454 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 03:09:00.484460 | orchestrator | Sunday 05 April 2026 03:08:58 +0000 (0:00:02.522) 0:09:56.884 ********** 2026-04-05 03:09:00.484467 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 03:09:00.484497 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 03:09:00.484505 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:09:00.484512 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-05 03:09:00.484518 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 03:09:00.484525 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:00.484539 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 03:09:50.972047 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 03:09:50.972224 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:50.972246 | orchestrator | 2026-04-05 03:09:50.972262 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-05 03:09:50.972389 | orchestrator | Sunday 05 April 2026 03:09:00 +0000 (0:00:01.641) 0:09:58.525 ********** 2026-04-05 03:09:50.972406 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:50.972420 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:50.972434 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:50.972449 | orchestrator | 2026-04-05 03:09:50.972462 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 03:09:50.972477 | orchestrator | Sunday 05 April 2026 03:09:00 +0000 (0:00:00.359) 0:09:58.885 ********** 2026-04-05 03:09:50.972493 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:50.972508 | orchestrator | 2026-04-05 03:09:50.972523 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 03:09:50.972539 | orchestrator | Sunday 05 April 2026 03:09:01 +0000 (0:00:00.851) 0:09:59.736 ********** 2026-04-05 03:09:50.972558 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:50.972578 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:50.972594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:50.972609 | orchestrator | 2026-04-05 03:09:50.972624 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 03:09:50.972639 | orchestrator | Sunday 05 April 2026 03:09:02 +0000 (0:00:00.859) 0:10:00.596 ********** 2026-04-05 03:09:50.972656 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972673 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 03:09:50.972690 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972704 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 03:09:50.972756 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972775 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 03:09:50.972791 | orchestrator | 2026-04-05 03:09:50.972806 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 03:09:50.972821 | orchestrator | Sunday 05 April 2026 03:09:07 +0000 (0:00:04.540) 0:10:05.136 ********** 2026-04-05 03:09:50.972836 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972852 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:09:50.972866 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972881 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:09:50.972895 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:09:50.972931 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:09:50.972946 | orchestrator | 2026-04-05 03:09:50.972960 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 03:09:50.972974 | orchestrator | Sunday 05 April 2026 03:09:09 +0000 (0:00:02.571) 0:10:07.707 ********** 2026-04-05 03:09:50.972989 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-05 03:09:50.973005 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:09:50.973019 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-05 03:09:50.973034 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:50.973048 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-05 03:09:50.973063 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:50.973078 | orchestrator | 2026-04-05 03:09:50.973093 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 03:09:50.973106 | orchestrator | Sunday 05 April 2026 03:09:11 +0000 (0:00:01.550) 0:10:09.258 ********** 2026-04-05 03:09:50.973119 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-05 03:09:50.973132 | orchestrator | 2026-04-05 03:09:50.973144 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 03:09:50.973160 | orchestrator | Sunday 05 April 2026 03:09:11 +0000 (0:00:00.251) 0:10:09.510 ********** 2026-04-05 03:09:50.973175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-05 03:09:50.973192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973311 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:50.973326 | orchestrator | 2026-04-05 03:09:50.973341 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 03:09:50.973356 | orchestrator | Sunday 05 April 2026 03:09:12 +0000 (0:00:00.684) 0:10:10.195 ********** 2026-04-05 03:09:50.973370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 03:09:50.973453 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
03:09:50.973469 | orchestrator | 2026-04-05 03:09:50.973484 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 03:09:50.973499 | orchestrator | Sunday 05 April 2026 03:09:12 +0000 (0:00:00.653) 0:10:10.849 ********** 2026-04-05 03:09:50.973514 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 03:09:50.973528 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 03:09:50.973541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 03:09:50.973555 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 03:09:50.973569 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 03:09:50.973583 | orchestrator | 2026-04-05 03:09:50.973598 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 03:09:50.973613 | orchestrator | Sunday 05 April 2026 03:09:43 +0000 (0:00:30.994) 0:10:41.844 ********** 2026-04-05 03:09:50.973629 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:50.973643 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:50.973657 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:50.973671 | orchestrator | 2026-04-05 03:09:50.973686 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 03:09:50.973701 | orchestrator | 
Sunday 05 April 2026 03:09:44 +0000 (0:00:00.361) 0:10:42.206 ********** 2026-04-05 03:09:50.973716 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:50.973730 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:50.973744 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:50.973759 | orchestrator | 2026-04-05 03:09:50.973781 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 03:09:50.973795 | orchestrator | Sunday 05 April 2026 03:09:44 +0000 (0:00:00.465) 0:10:42.671 ********** 2026-04-05 03:09:50.973809 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:50.973823 | orchestrator | 2026-04-05 03:09:50.973837 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-05 03:09:50.973851 | orchestrator | Sunday 05 April 2026 03:09:45 +0000 (0:00:00.926) 0:10:43.598 ********** 2026-04-05 03:09:50.973865 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:50.973879 | orchestrator | 2026-04-05 03:09:50.973893 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 03:09:50.973906 | orchestrator | Sunday 05 April 2026 03:09:46 +0000 (0:00:00.870) 0:10:44.469 ********** 2026-04-05 03:09:50.973921 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:09:50.973935 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:50.973948 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:50.973961 | orchestrator | 2026-04-05 03:09:50.973974 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 03:09:50.973987 | orchestrator | Sunday 05 April 2026 03:09:47 +0000 (0:00:01.365) 0:10:45.834 ********** 2026-04-05 03:09:50.974012 | orchestrator | changed: 
[testbed-node-3] 2026-04-05 03:09:50.974114 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:50.974127 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:50.974141 | orchestrator | 2026-04-05 03:09:50.974155 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 03:09:50.974169 | orchestrator | Sunday 05 April 2026 03:09:49 +0000 (0:00:01.328) 0:10:47.163 ********** 2026-04-05 03:09:50.974182 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:09:50.974196 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:09:50.974210 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:09:50.974223 | orchestrator | 2026-04-05 03:09:50.974252 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 03:09:58.116159 | orchestrator | Sunday 05 April 2026 03:09:50 +0000 (0:00:01.846) 0:10:49.009 ********** 2026-04-05 03:09:58.116347 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:58.116370 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:58.116383 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 03:09:58.116409 | orchestrator | 2026-04-05 03:09:58.116432 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-05 03:09:58.116444 | orchestrator | Sunday 05 April 2026 03:09:53 +0000 (0:00:02.868) 0:10:51.878 ********** 2026-04-05 03:09:58.116455 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:58.116468 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:58.116479 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:58.116492 | orchestrator 
| 2026-04-05 03:09:58.116511 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-05 03:09:58.116529 | orchestrator | Sunday 05 April 2026 03:09:54 +0000 (0:00:00.464) 0:10:52.342 ********** 2026-04-05 03:09:58.116548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:09:58.116568 | orchestrator | 2026-04-05 03:09:58.116588 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-05 03:09:58.116606 | orchestrator | Sunday 05 April 2026 03:09:55 +0000 (0:00:00.916) 0:10:53.259 ********** 2026-04-05 03:09:58.116625 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:58.116638 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:58.116649 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:58.116660 | orchestrator | 2026-04-05 03:09:58.116671 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-05 03:09:58.116682 | orchestrator | Sunday 05 April 2026 03:09:55 +0000 (0:00:00.365) 0:10:53.624 ********** 2026-04-05 03:09:58.116695 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:09:58.116708 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:09:58.116721 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:09:58.116735 | orchestrator | 2026-04-05 03:09:58.116748 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-05 03:09:58.116761 | orchestrator | Sunday 05 April 2026 03:09:55 +0000 (0:00:00.360) 0:10:53.985 ********** 2026-04-05 03:09:58.116774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:09:58.116788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:09:58.116801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:09:58.116814 | orchestrator 
| skipping: [testbed-node-3] 2026-04-05 03:09:58.116827 | orchestrator | 2026-04-05 03:09:58.116841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-05 03:09:58.116854 | orchestrator | Sunday 05 April 2026 03:09:56 +0000 (0:00:01.066) 0:10:55.052 ********** 2026-04-05 03:09:58.116868 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:09:58.116881 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:09:58.116921 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:09:58.116935 | orchestrator | 2026-04-05 03:09:58.116948 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:09:58.116962 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-05 03:09:58.116992 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-05 03:09:58.117006 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-05 03:09:58.117020 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-05 03:09:58.117033 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-05 03:09:58.117045 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-05 03:09:58.117056 | orchestrator | 2026-04-05 03:09:58.117067 | orchestrator | 2026-04-05 03:09:58.117077 | orchestrator | 2026-04-05 03:09:58.117088 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:09:58.117099 | orchestrator | Sunday 05 April 2026 03:09:57 +0000 (0:00:00.593) 0:10:55.646 ********** 2026-04-05 03:09:58.117110 | orchestrator | =============================================================================== 
2026-04-05 03:09:58.117121 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 59.38s 2026-04-05 03:09:58.117132 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.22s 2026-04-05 03:09:58.117142 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.00s 2026-04-05 03:09:58.117153 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.43s 2026-04-05 03:09:58.117164 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.90s 2026-04-05 03:09:58.117175 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.80s 2026-04-05 03:09:58.117204 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.84s 2026-04-05 03:09:58.117216 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.65s 2026-04-05 03:09:58.117226 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.67s 2026-04-05 03:09:58.117237 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.76s 2026-04-05 03:09:58.117248 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.70s 2026-04-05 03:09:58.117259 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.39s 2026-04-05 03:09:58.117303 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.54s 2026-04-05 03:09:58.117314 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.18s 2026-04-05 03:09:58.117325 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.00s 2026-04-05 03:09:58.117337 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.78s 2026-04-05 
03:09:58.117348 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s 2026-04-05 03:09:58.117359 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.48s 2026-04-05 03:09:58.117370 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.33s 2026-04-05 03:09:58.117381 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.05s 2026-04-05 03:10:00.728397 | orchestrator | 2026-04-05 03:10:00 | INFO  | Task 5282d4a2-7869-479c-8db4-3771b6319e5f (ceph-pools) was prepared for execution. 2026-04-05 03:10:00.728540 | orchestrator | 2026-04-05 03:10:00 | INFO  | It takes a moment until task 5282d4a2-7869-479c-8db4-3771b6319e5f (ceph-pools) has been started and output is visible here. 2026-04-05 03:10:16.474570 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-05 03:10:16.474667 | orchestrator | 2.16.14 2026-04-05 03:10:16.474687 | orchestrator | 2026-04-05 03:10:16.474702 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-04-05 03:10:16.474716 | orchestrator | 2026-04-05 03:10:16.474737 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 03:10:16.474753 | orchestrator | Sunday 05 April 2026 03:10:05 +0000 (0:00:00.695) 0:00:00.695 ********** 2026-04-05 03:10:16.474766 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:10:16.474781 | orchestrator | 2026-04-05 03:10:16.474793 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 03:10:16.474805 | orchestrator | Sunday 05 April 2026 03:10:06 +0000 (0:00:00.747) 0:00:01.442 ********** 2026-04-05 03:10:16.474818 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.474831 | 
orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.474843 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.474855 | orchestrator | 2026-04-05 03:10:16.474868 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 03:10:16.474881 | orchestrator | Sunday 05 April 2026 03:10:07 +0000 (0:00:00.688) 0:00:02.131 ********** 2026-04-05 03:10:16.474893 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.474906 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.474919 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.474931 | orchestrator | 2026-04-05 03:10:16.474945 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 03:10:16.474958 | orchestrator | Sunday 05 April 2026 03:10:07 +0000 (0:00:00.322) 0:00:02.453 ********** 2026-04-05 03:10:16.474972 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.474985 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.474998 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475011 | orchestrator | 2026-04-05 03:10:16.475041 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 03:10:16.475056 | orchestrator | Sunday 05 April 2026 03:10:08 +0000 (0:00:00.959) 0:00:03.413 ********** 2026-04-05 03:10:16.475067 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.475079 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.475092 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475105 | orchestrator | 2026-04-05 03:10:16.475116 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 03:10:16.475128 | orchestrator | Sunday 05 April 2026 03:10:08 +0000 (0:00:00.356) 0:00:03.770 ********** 2026-04-05 03:10:16.475140 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.475154 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.475166 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475180 | orchestrator | 2026-04-05 03:10:16.475193 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 03:10:16.475205 | orchestrator | Sunday 05 April 2026 03:10:09 +0000 (0:00:00.330) 0:00:04.100 ********** 2026-04-05 03:10:16.475217 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.475230 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.475242 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475284 | orchestrator | 2026-04-05 03:10:16.475300 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 03:10:16.475315 | orchestrator | Sunday 05 April 2026 03:10:09 +0000 (0:00:00.361) 0:00:04.462 ********** 2026-04-05 03:10:16.475330 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:16.475346 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:16.475362 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:16.475377 | orchestrator | 2026-04-05 03:10:16.475392 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 03:10:16.475439 | orchestrator | Sunday 05 April 2026 03:10:10 +0000 (0:00:00.567) 0:00:05.029 ********** 2026-04-05 03:10:16.475458 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.475490 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.475505 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475519 | orchestrator | 2026-04-05 03:10:16.475534 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 03:10:16.475548 | orchestrator | Sunday 05 April 2026 03:10:10 +0000 (0:00:00.344) 0:00:05.373 ********** 2026-04-05 03:10:16.475559 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 03:10:16.475568 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:10:16.475577 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:10:16.475585 | orchestrator | 2026-04-05 03:10:16.475594 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 03:10:16.475602 | orchestrator | Sunday 05 April 2026 03:10:11 +0000 (0:00:00.723) 0:00:06.096 ********** 2026-04-05 03:10:16.475611 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:16.475619 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:16.475628 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:16.475636 | orchestrator | 2026-04-05 03:10:16.475645 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 03:10:16.475653 | orchestrator | Sunday 05 April 2026 03:10:11 +0000 (0:00:00.468) 0:00:06.565 ********** 2026-04-05 03:10:16.475662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 03:10:16.475670 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:10:16.475679 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:10:16.475688 | orchestrator | 2026-04-05 03:10:16.475696 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 03:10:16.475705 | orchestrator | Sunday 05 April 2026 03:10:14 +0000 (0:00:02.357) 0:00:08.922 ********** 2026-04-05 03:10:16.475714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 03:10:16.475723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 03:10:16.475732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 03:10:16.475745 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:16.475760 | 
orchestrator | 2026-04-05 03:10:16.475800 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 03:10:16.475815 | orchestrator | Sunday 05 April 2026 03:10:14 +0000 (0:00:00.827) 0:00:09.749 ********** 2026-04-05 03:10:16.475828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.475844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.475860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.475873 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:16.475885 | orchestrator | 2026-04-05 03:10:16.475900 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 03:10:16.475915 | orchestrator | Sunday 05 April 2026 03:10:16 +0000 (0:00:01.128) 0:00:10.878 ********** 2026-04-05 03:10:16.475943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.475975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.475990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 03:10:16.476006 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:16.476033 | orchestrator | 2026-04-05 03:10:16.476050 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 03:10:16.476059 | orchestrator | Sunday 05 April 2026 03:10:16 +0000 (0:00:00.200) 0:00:11.079 ********** 2026-04-05 03:10:16.476075 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b58ad7ef29db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 03:10:12.708557', 'end': '2026-04-05 03:10:12.761042', 'delta': '0:00:00.052485', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b58ad7ef29db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 03:10:16.476093 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0027b45af4f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 03:10:13.295899', 'end': '2026-04-05 03:10:13.355726', 'delta': '0:00:00.059827', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0027b45af4f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 03:10:16.476122 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0e8f8775caf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 03:10:13.881005', 'end': '2026-04-05 03:10:13.925657', 'delta': '0:00:00.044652', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0e8f8775caf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 03:10:24.137172 | orchestrator | 2026-04-05 03:10:24.137370 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 03:10:24.137390 | orchestrator | Sunday 05 April 2026 03:10:16 +0000 (0:00:00.220) 0:00:11.299 ********** 2026-04-05 03:10:24.137425 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:24.137437 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:24.137446 | 
orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:24.137456 | orchestrator | 2026-04-05 03:10:24.137466 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 03:10:24.137476 | orchestrator | Sunday 05 April 2026 03:10:16 +0000 (0:00:00.517) 0:00:11.817 ********** 2026-04-05 03:10:24.137486 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-05 03:10:24.137496 | orchestrator | 2026-04-05 03:10:24.137520 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 03:10:24.137531 | orchestrator | Sunday 05 April 2026 03:10:18 +0000 (0:00:01.816) 0:00:13.633 ********** 2026-04-05 03:10:24.137540 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137550 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.137560 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.137570 | orchestrator | 2026-04-05 03:10:24.137579 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 03:10:24.137589 | orchestrator | Sunday 05 April 2026 03:10:19 +0000 (0:00:00.326) 0:00:13.960 ********** 2026-04-05 03:10:24.137599 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137608 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.137618 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.137628 | orchestrator | 2026-04-05 03:10:24.137637 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 03:10:24.137647 | orchestrator | Sunday 05 April 2026 03:10:20 +0000 (0:00:00.959) 0:00:14.919 ********** 2026-04-05 03:10:24.137656 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137666 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.137675 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.137685 | orchestrator | 2026-04-05 03:10:24.137698 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 03:10:24.137710 | orchestrator | Sunday 05 April 2026 03:10:20 +0000 (0:00:00.336) 0:00:15.256 ********** 2026-04-05 03:10:24.137722 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:24.137733 | orchestrator | 2026-04-05 03:10:24.137744 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 03:10:24.137755 | orchestrator | Sunday 05 April 2026 03:10:20 +0000 (0:00:00.150) 0:00:15.407 ********** 2026-04-05 03:10:24.137767 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137778 | orchestrator | 2026-04-05 03:10:24.137789 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 03:10:24.137801 | orchestrator | Sunday 05 April 2026 03:10:20 +0000 (0:00:00.359) 0:00:15.766 ********** 2026-04-05 03:10:24.137812 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137823 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.137834 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.137845 | orchestrator | 2026-04-05 03:10:24.137856 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 03:10:24.137868 | orchestrator | Sunday 05 April 2026 03:10:21 +0000 (0:00:00.314) 0:00:16.080 ********** 2026-04-05 03:10:24.137879 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.137890 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.137901 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.137913 | orchestrator | 2026-04-05 03:10:24.137928 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 03:10:24.137945 | orchestrator | Sunday 05 April 2026 03:10:21 +0000 (0:00:00.357) 0:00:16.438 ********** 2026-04-05 03:10:24.137962 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 03:10:24.137983 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.138006 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.138099 | orchestrator | 2026-04-05 03:10:24.138118 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 03:10:24.138135 | orchestrator | Sunday 05 April 2026 03:10:22 +0000 (0:00:00.602) 0:00:17.041 ********** 2026-04-05 03:10:24.138165 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.138182 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.138198 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.138214 | orchestrator | 2026-04-05 03:10:24.138224 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 03:10:24.138234 | orchestrator | Sunday 05 April 2026 03:10:22 +0000 (0:00:00.361) 0:00:17.402 ********** 2026-04-05 03:10:24.138244 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.138285 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.138295 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.138309 | orchestrator | 2026-04-05 03:10:24.138326 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 03:10:24.138341 | orchestrator | Sunday 05 April 2026 03:10:22 +0000 (0:00:00.344) 0:00:17.746 ********** 2026-04-05 03:10:24.138357 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.138372 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.138389 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.138408 | orchestrator | 2026-04-05 03:10:24.138425 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 03:10:24.138443 | orchestrator | Sunday 05 April 2026 03:10:23 +0000 (0:00:00.601) 0:00:18.348 ********** 2026-04-05 03:10:24.138456 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 03:10:24.138466 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.138476 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.138485 | orchestrator | 2026-04-05 03:10:24.138495 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 03:10:24.138504 | orchestrator | Sunday 05 April 2026 03:10:23 +0000 (0:00:00.395) 0:00:18.743 ********** 2026-04-05 03:10:24.138538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-05 03:10:24.138664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.138693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.229961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 
'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.230217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.230345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.230372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.230392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.230402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.230418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-05 03:10:24.423774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.423883 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.423892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.423897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:24.423906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.423912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.423917 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:24.423921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.423937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-05 03:10:24.668547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.668571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.668583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.668594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.668606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-05-01-47-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-05 03:10:24.668618 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:24.668630 | orchestrator | 2026-04-05 03:10:24.668641 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-05 03:10:24.668652 | orchestrator | Sunday 05 April 2026 03:10:24 +0000 (0:00:00.653) 0:00:19.396 ********** 2026-04-05 03:10:24.668674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 03:10:24.798356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-05 03:10:24.798457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
2026-04-05 03:10:24.798475 | orchestrator | skipping: [testbed-node-3] => items loop0..loop7 (0-byte virtual loop devices); conditional 'osd_auto_discovery | default(False) | bool' was false for every item
2026-04-05 03:10:24.798629 | orchestrator | skipping: [testbed-node-4] => items dm-0, dm-1 (20.00 GB ceph OSD block LVs); same false condition
2026-04-05 03:10:24.896223 | orchestrator | skipping: [testbed-node-3] => items sda (80.00 GB QEMU HARDDISK, partitions sda1/sda14/sda15/sda16), sdb and sdc (20.00 GB QEMU HARDDISK, LVM PVs holding the ceph OSD block LVs dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK, unused), sr0 (506.00 KB QEMU DVD-ROM, label config-2); same false condition
2026-04-05 03:10:24.896129 | orchestrator | skipping: [testbed-node-4] => items loop0..loop7; same false condition
2026-04-05 03:10:25.026472 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:10:25.026567 | orchestrator | skipping: [testbed-node-4] => items sda, sdb, sdc, sdd, sr0 (same device layout as testbed-node-3); same false condition
2026-04-05 03:10:25.026682 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:10:25.026688 | orchestrator | skipping: [testbed-node-5] => items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0 (same device layout); same false condition
2026-04-05 03:10:38.018538 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:10:38.018552 | orchestrator |
2026-04-05 03:10:38.018562 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 03:10:38.018572 | orchestrator | Sunday 05 April 2026 03:10:25 +0000 (0:00:00.683) 0:00:20.079 **********
2026-04-05 03:10:38.018580 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:10:38.018589 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:10:38.018597 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:10:38.018605 | orchestrator |
2026-04-05 03:10:38.018613 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 03:10:38.018621 | orchestrator | Sunday 05 April 2026 03:10:26 +0000 (0:00:00.936) 0:00:21.016 **********
2026-04-05 03:10:38.018629 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:10:38.018637 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:10:38.018645 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:10:38.018652 | orchestrator |
2026-04-05 03:10:38.018661 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 03:10:38.018669 | orchestrator | Sunday 05 April 2026 03:10:26 +0000 (0:00:00.330) 0:00:21.347 **********
2026-04-05 03:10:38.018676 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:10:38.018685 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:10:38.018693 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:10:38.018701 | orchestrator |
2026-04-05 03:10:38.018721 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 03:10:38.018730 | orchestrator | Sunday 05 April 2026 03:10:27 +0000 (0:00:00.694) 0:00:22.041 **********
********** 2026-04-05 03:10:38.018738 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.018746 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.018754 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.018762 | orchestrator | 2026-04-05 03:10:38.018770 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 03:10:38.018778 | orchestrator | Sunday 05 April 2026 03:10:27 +0000 (0:00:00.345) 0:00:22.387 ********** 2026-04-05 03:10:38.018788 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.018801 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.018814 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.018827 | orchestrator | 2026-04-05 03:10:38.018839 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 03:10:38.018852 | orchestrator | Sunday 05 April 2026 03:10:28 +0000 (0:00:00.731) 0:00:23.118 ********** 2026-04-05 03:10:38.018866 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.018879 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.018894 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.018908 | orchestrator | 2026-04-05 03:10:38.018922 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 03:10:38.018936 | orchestrator | Sunday 05 April 2026 03:10:28 +0000 (0:00:00.339) 0:00:23.458 ********** 2026-04-05 03:10:38.018950 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 03:10:38.018960 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 03:10:38.018973 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 03:10:38.018986 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 03:10:38.018999 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 03:10:38.019011 | orchestrator 
| ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 03:10:38.019024 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 03:10:38.019047 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 03:10:38.019060 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 03:10:38.019075 | orchestrator | 2026-04-05 03:10:38.019087 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 03:10:38.019101 | orchestrator | Sunday 05 April 2026 03:10:29 +0000 (0:00:01.079) 0:00:24.538 ********** 2026-04-05 03:10:38.019116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 03:10:38.019130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 03:10:38.019143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 03:10:38.019154 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019163 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 03:10:38.019170 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 03:10:38.019178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 03:10:38.019186 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.019194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 03:10:38.019202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 03:10:38.019210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 03:10:38.019217 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.019225 | orchestrator | 2026-04-05 03:10:38.019233 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 03:10:38.019241 | orchestrator | Sunday 05 April 2026 03:10:30 +0000 (0:00:00.467) 0:00:25.005 ********** 2026-04-05 
03:10:38.019329 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:10:38.019338 | orchestrator | 2026-04-05 03:10:38.019347 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 03:10:38.019357 | orchestrator | Sunday 05 April 2026 03:10:31 +0000 (0:00:00.856) 0:00:25.862 ********** 2026-04-05 03:10:38.019365 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019373 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.019380 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.019388 | orchestrator | 2026-04-05 03:10:38.019396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 03:10:38.019404 | orchestrator | Sunday 05 April 2026 03:10:31 +0000 (0:00:00.374) 0:00:26.236 ********** 2026-04-05 03:10:38.019412 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019420 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.019427 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.019435 | orchestrator | 2026-04-05 03:10:38.019443 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 03:10:38.019451 | orchestrator | Sunday 05 April 2026 03:10:31 +0000 (0:00:00.347) 0:00:26.584 ********** 2026-04-05 03:10:38.019459 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019466 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:10:38.019474 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:10:38.019482 | orchestrator | 2026-04-05 03:10:38.019490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 03:10:38.019498 | orchestrator | Sunday 05 April 2026 03:10:32 +0000 (0:00:00.548) 0:00:27.133 ********** 2026-04-05 
03:10:38.019505 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:38.019513 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:38.019521 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:38.019529 | orchestrator | 2026-04-05 03:10:38.019537 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 03:10:38.019545 | orchestrator | Sunday 05 April 2026 03:10:32 +0000 (0:00:00.484) 0:00:27.617 ********** 2026-04-05 03:10:38.019553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:10:38.019568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:10:38.019576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:10:38.019590 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019598 | orchestrator | 2026-04-05 03:10:38.019606 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 03:10:38.019614 | orchestrator | Sunday 05 April 2026 03:10:33 +0000 (0:00:00.401) 0:00:28.019 ********** 2026-04-05 03:10:38.019622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:10:38.019630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:10:38.019637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:10:38.019645 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019653 | orchestrator | 2026-04-05 03:10:38.019661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 03:10:38.019669 | orchestrator | Sunday 05 April 2026 03:10:33 +0000 (0:00:00.416) 0:00:28.435 ********** 2026-04-05 03:10:38.019677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 03:10:38.019684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 03:10:38.019692 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 03:10:38.019700 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:10:38.019708 | orchestrator | 2026-04-05 03:10:38.019716 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 03:10:38.019724 | orchestrator | Sunday 05 April 2026 03:10:34 +0000 (0:00:00.426) 0:00:28.862 ********** 2026-04-05 03:10:38.019732 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:10:38.019739 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:10:38.019747 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:10:38.019755 | orchestrator | 2026-04-05 03:10:38.019763 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 03:10:38.019771 | orchestrator | Sunday 05 April 2026 03:10:34 +0000 (0:00:00.350) 0:00:29.212 ********** 2026-04-05 03:10:38.019778 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 03:10:38.019786 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 03:10:38.019794 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 03:10:38.019802 | orchestrator | 2026-04-05 03:10:38.019810 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 03:10:38.019818 | orchestrator | Sunday 05 April 2026 03:10:35 +0000 (0:00:00.924) 0:00:30.136 ********** 2026-04-05 03:10:38.019825 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 03:10:38.019833 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:10:38.019841 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:10:38.019849 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 03:10:38.019856 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-05 03:10:38.019863 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 03:10:38.019870 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 03:10:38.019877 | orchestrator | 2026-04-05 03:10:38.019883 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 03:10:38.019890 | orchestrator | Sunday 05 April 2026 03:10:36 +0000 (0:00:00.922) 0:00:31.058 ********** 2026-04-05 03:10:38.019896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 03:10:38.019907 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 03:12:21.344894 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 03:12:21.345040 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-05 03:12:21.345110 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 03:12:21.345130 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 03:12:21.345150 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 03:12:21.345168 | orchestrator | 2026-04-05 03:12:21.345190 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-05 03:12:21.345245 | orchestrator | Sunday 05 April 2026 03:10:38 +0000 (0:00:01.784) 0:00:32.843 ********** 2026-04-05 03:12:21.345264 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:12:21.345276 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:12:21.345288 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-05 03:12:21.345300 | orchestrator | 2026-04-05 03:12:21.345322 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-05 03:12:21.345348 | orchestrator | Sunday 05 April 2026 03:10:38 +0000 (0:00:00.485) 0:00:33.329 ********** 2026-04-05 03:12:21.345370 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 03:12:21.345391 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 03:12:21.345430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 03:12:21.345451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 03:12:21.345471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-05 03:12:21.345492 | orchestrator | 2026-04-05 03:12:21.345511 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-05 03:12:21.345530 | orchestrator | Sunday 05 April 2026 03:11:25 +0000 (0:00:47.225) 0:01:20.554 ********** 2026-04-05 03:12:21.345543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345594 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345607 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345625 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-05 03:12:21.345642 | orchestrator | 2026-04-05 03:12:21.345671 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-05 03:12:21.345692 | orchestrator | Sunday 05 April 2026 03:11:50 +0000 (0:00:25.103) 0:01:45.658 ********** 2026-04-05 03:12:21.345709 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345742 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345760 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345795 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345812 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345832 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 03:12:21.345849 | orchestrator | 2026-04-05 03:12:21.345867 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-05 03:12:21.345885 | orchestrator | Sunday 05 April 2026 03:12:03 +0000 (0:00:12.219) 0:01:57.878 ********** 2026-04-05 03:12:21.345904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345950 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 03:12:21.345965 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.345977 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.345988 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 03:12:21.346000 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.346098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.346120 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 03:12:21.346137 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.346154 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.346172 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 03:12:21.346189 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.346244 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.346265 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-05 03:12:21.346284 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.346302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 03:12:21.346321 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 03:12:21.346340 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 03:12:21.346359 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-05 03:12:21.346375 | orchestrator | 2026-04-05 03:12:21.346386 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:12:21.346411 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-05 03:12:21.346441 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 03:12:21.346461 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-05 03:12:21.346478 | orchestrator | 2026-04-05 03:12:21.346496 | orchestrator | 2026-04-05 03:12:21.346511 | orchestrator | 2026-04-05 03:12:21.346527 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:12:21.346544 | orchestrator | Sunday 05 April 2026 03:12:20 +0000 (0:00:17.882) 0:02:15.760 ********** 2026-04-05 03:12:21.346560 | orchestrator | =============================================================================== 2026-04-05 03:12:21.346594 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.23s 2026-04-05 03:12:21.346610 | orchestrator | generate keys ---------------------------------------------------------- 25.10s 2026-04-05 03:12:21.346628 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.88s 
2026-04-05 03:12:21.346645 | orchestrator | get keys from monitors ------------------------------------------------- 12.22s 2026-04-05 03:12:21.346663 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.36s 2026-04-05 03:12:21.346680 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.82s 2026-04-05 03:12:21.346697 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.78s 2026-04-05 03:12:21.346804 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.13s 2026-04-05 03:12:21.346834 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.08s 2026-04-05 03:12:21.346852 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.96s 2026-04-05 03:12:21.346869 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.96s 2026-04-05 03:12:21.346887 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.94s 2026-04-05 03:12:21.346904 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.92s 2026-04-05 03:12:21.346921 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2026-04-05 03:12:21.346938 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.86s 2026-04-05 03:12:21.346956 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.83s 2026-04-05 03:12:21.346973 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.75s 2026-04-05 03:12:21.346990 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s 2026-04-05 03:12:21.347006 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-04-05 
03:12:21.347022 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s 2026-04-05 03:12:23.809683 | orchestrator | 2026-04-05 03:12:23 | INFO  | Task 477e9d7b-7bdb-4d64-b866-f435838397c8 (copy-ceph-keys) was prepared for execution. 2026-04-05 03:12:23.810266 | orchestrator | 2026-04-05 03:12:23 | INFO  | It takes a moment until task 477e9d7b-7bdb-4d64-b866-f435838397c8 (copy-ceph-keys) has been started and output is visible here. 2026-04-05 03:13:04.037440 | orchestrator | 2026-04-05 03:13:04.037549 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-05 03:13:04.037566 | orchestrator | 2026-04-05 03:13:04.037578 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-05 03:13:04.037591 | orchestrator | Sunday 05 April 2026 03:12:28 +0000 (0:00:00.169) 0:00:00.169 ********** 2026-04-05 03:13:04.037604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 03:13:04.037617 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037628 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037640 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:13:04.037652 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037663 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 03:13:04.037675 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 03:13:04.037686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-04-05 03:13:04.037722 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 03:13:04.037734 | orchestrator | 2026-04-05 03:13:04.037746 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-05 03:13:04.037757 | orchestrator | Sunday 05 April 2026 03:12:33 +0000 (0:00:04.849) 0:00:05.019 ********** 2026-04-05 03:13:04.037769 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-05 03:13:04.037795 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037806 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037816 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:13:04.037827 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037839 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-05 03:13:04.037850 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-05 03:13:04.037860 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-05 03:13:04.037871 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-05 03:13:04.037882 | orchestrator | 2026-04-05 03:13:04.037894 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-05 03:13:04.037904 | orchestrator | Sunday 05 April 2026 03:12:37 +0000 (0:00:04.592) 0:00:09.612 ********** 2026-04-05 03:13:04.037916 
| orchestrator | changed: [testbed-manager -> localhost] 2026-04-05 03:13:04.037927 | orchestrator | 2026-04-05 03:13:04.037937 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-05 03:13:04.037948 | orchestrator | Sunday 05 April 2026 03:12:38 +0000 (0:00:01.035) 0:00:10.648 ********** 2026-04-05 03:13:04.037959 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-05 03:13:04.037970 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037980 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.037991 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:13:04.038001 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.038011 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-05 03:13:04.038084 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-05 03:13:04.038094 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-05 03:13:04.038104 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-05 03:13:04.038115 | orchestrator | 2026-04-05 03:13:04.038126 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-05 03:13:04.038137 | orchestrator | Sunday 05 April 2026 03:12:52 +0000 (0:00:14.224) 0:00:24.872 ********** 2026-04-05 03:13:04.038148 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-05 03:13:04.038160 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-04-05 03:13:04.038172 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 03:13:04.038184 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-05 03:13:04.038238 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 03:13:04.038262 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-05 03:13:04.038273 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-05 03:13:04.038283 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-05 03:13:04.038294 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-05 03:13:04.038305 | orchestrator | 2026-04-05 03:13:04.038317 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-05 03:13:04.038327 | orchestrator | Sunday 05 April 2026 03:12:56 +0000 (0:00:03.304) 0:00:28.177 ********** 2026-04-05 03:13:04.038338 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-05 03:13:04.038351 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.038363 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.038374 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:13:04.038385 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-05 03:13:04.038395 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-05 03:13:04.038407 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-04-05 03:13:04.038418 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-05 03:13:04.038430 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-05 03:13:04.038441 | orchestrator | 2026-04-05 03:13:04.038454 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:13:04.038473 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 03:13:04.038486 | orchestrator | 2026-04-05 03:13:04.038498 | orchestrator | 2026-04-05 03:13:04.038509 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:13:04.038519 | orchestrator | Sunday 05 April 2026 03:13:03 +0000 (0:00:07.430) 0:00:35.608 ********** 2026-04-05 03:13:04.038530 | orchestrator | =============================================================================== 2026-04-05 03:13:04.038540 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.22s 2026-04-05 03:13:04.038552 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.43s 2026-04-05 03:13:04.038563 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.85s 2026-04-05 03:13:04.038575 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.59s 2026-04-05 03:13:04.038587 | orchestrator | Check if target directories exist --------------------------------------- 3.30s 2026-04-05 03:13:04.038598 | orchestrator | Create share directory -------------------------------------------------- 1.04s 2026-04-05 03:13:16.777714 | orchestrator | 2026-04-05 03:13:16 | INFO  | Task 44c840aa-5084-4d87-82ed-9c5cd1c8b5db (cephclient) was prepared for execution. 
2026-04-05 03:13:16.777816 | orchestrator | 2026-04-05 03:13:16 | INFO  | It takes a moment until task 44c840aa-5084-4d87-82ed-9c5cd1c8b5db (cephclient) has been started and output is visible here.
2026-04-05 03:14:21.003631 | orchestrator |
2026-04-05 03:14:21.003742 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-05 03:14:21.003758 | orchestrator |
2026-04-05 03:14:21.003770 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-05 03:14:21.003782 | orchestrator | Sunday 05 April 2026 03:13:21 +0000 (0:00:00.262) 0:00:00.262 **********
2026-04-05 03:14:21.003793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-05 03:14:21.003807 | orchestrator |
2026-04-05 03:14:21.003840 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-05 03:14:21.003852 | orchestrator | Sunday 05 April 2026 03:13:21 +0000 (0:00:00.275) 0:00:00.537 **********
2026-04-05 03:14:21.003863 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-05 03:14:21.003874 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-05 03:14:21.003885 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-05 03:14:21.003897 | orchestrator |
2026-04-05 03:14:21.003908 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-05 03:14:21.003919 | orchestrator | Sunday 05 April 2026 03:13:23 +0000 (0:00:01.297) 0:00:01.834 **********
2026-04-05 03:14:21.003930 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-05 03:14:21.003941 | orchestrator |
2026-04-05 03:14:21.003952 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-05 03:14:21.003963 | orchestrator | Sunday 05 April 2026 03:13:24 +0000 (0:00:01.716) 0:00:03.550 **********
2026-04-05 03:14:21.003974 | orchestrator | changed: [testbed-manager]
2026-04-05 03:14:21.003985 | orchestrator |
2026-04-05 03:14:21.003996 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-05 03:14:21.004006 | orchestrator | Sunday 05 April 2026 03:13:25 +0000 (0:00:00.991) 0:00:04.542 **********
2026-04-05 03:14:21.004017 | orchestrator | changed: [testbed-manager]
2026-04-05 03:14:21.004028 | orchestrator |
2026-04-05 03:14:21.004038 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-05 03:14:21.004049 | orchestrator | Sunday 05 April 2026 03:13:26 +0000 (0:00:00.983) 0:00:05.525 **********
2026-04-05 03:14:21.004060 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-05 03:14:21.004071 | orchestrator | ok: [testbed-manager]
2026-04-05 03:14:21.004082 | orchestrator |
2026-04-05 03:14:21.004092 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-05 03:14:21.004103 | orchestrator | Sunday 05 April 2026 03:14:10 +0000 (0:00:44.228) 0:00:49.754 **********
2026-04-05 03:14:21.004114 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-05 03:14:21.004125 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-05 03:14:21.004136 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-05 03:14:21.004146 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-05 03:14:21.004157 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-05 03:14:21.004210 | orchestrator |
2026-04-05 03:14:21.004223 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-05 03:14:21.004237 | orchestrator | Sunday 05 April 2026 03:14:15 +0000 (0:00:04.185) 0:00:53.940 **********
2026-04-05 03:14:21.004250 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-05 03:14:21.004264 | orchestrator |
2026-04-05 03:14:21.004277 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-05 03:14:21.004290 | orchestrator | Sunday 05 April 2026 03:14:15 +0000 (0:00:00.460) 0:00:54.400 **********
2026-04-05 03:14:21.004303 | orchestrator | skipping: [testbed-manager]
2026-04-05 03:14:21.004315 | orchestrator |
2026-04-05 03:14:21.004327 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-05 03:14:21.004338 | orchestrator | Sunday 05 April 2026 03:14:15 +0000 (0:00:00.136) 0:00:54.537 **********
2026-04-05 03:14:21.004349 | orchestrator | skipping: [testbed-manager]
2026-04-05 03:14:21.004359 | orchestrator |
2026-04-05 03:14:21.004370 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-05 03:14:21.004381 | orchestrator | Sunday 05 April 2026 03:14:16 +0000 (0:00:00.458) 0:00:54.996 **********
2026-04-05 03:14:21.004406 | orchestrator | changed: [testbed-manager]
2026-04-05 03:14:21.004418 | orchestrator |
2026-04-05 03:14:21.004428 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-05 03:14:21.004451 | orchestrator | Sunday 05 April 2026 03:14:17 +0000 (0:00:01.569) 0:00:56.565 **********
2026-04-05 03:14:21.004462 | orchestrator | changed: [testbed-manager]
2026-04-05 03:14:21.004473 | orchestrator |
2026-04-05 03:14:21.004483 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-05 03:14:21.004494 | orchestrator | Sunday 05 April 2026 03:14:18 +0000 (0:00:00.594) 0:00:57.160 **********
2026-04-05 03:14:21.004505 | orchestrator | changed: [testbed-manager]
2026-04-05 03:14:21.004516 | orchestrator |
2026-04-05 03:14:21.004526 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-05 03:14:21.004537 | orchestrator | Sunday 05 April 2026 03:14:18 +0000 (0:00:00.562) 0:00:57.722 **********
2026-04-05 03:14:21.004548 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-05 03:14:21.004559 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-05 03:14:21.004570 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-05 03:14:21.004580 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-05 03:14:21.004591 | orchestrator |
2026-04-05 03:14:21.004603 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:14:21.004614 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 03:14:21.004625 | orchestrator |
2026-04-05 03:14:21.004636 | orchestrator |
2026-04-05 03:14:21.004665 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:14:21.004677 | orchestrator | Sunday 05 April 2026 03:14:20 +0000 (0:00:01.639) 0:00:59.362 **********
2026-04-05 03:14:21.004688 | orchestrator | ===============================================================================
2026-04-05 03:14:21.004698 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 44.23s
2026-04-05 03:14:21.004709 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.19s
2026-04-05 03:14:21.004720 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.72s
2026-04-05 03:14:21.004731 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.64s
2026-04-05 03:14:21.004742 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.57s
2026-04-05 03:14:21.004752 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s
2026-04-05 03:14:21.004763 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s
2026-04-05 03:14:21.004774 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s
2026-04-05 03:14:21.004784 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.59s
2026-04-05 03:14:21.004795 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2026-04-05 03:14:21.004806 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2026-04-05 03:14:21.004817 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.46s
2026-04-05 03:14:21.004828 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.28s
2026-04-05 03:14:21.004838 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-04-05 03:14:23.546777 | orchestrator | 2026-04-05 03:14:23 | INFO  | Task 24b5b7e3-24ad-4d1a-a93c-7d007b4e20b2 (ceph-bootstrap-dashboard) was prepared for execution.
2026-04-05 03:14:23.546852 | orchestrator | 2026-04-05 03:14:23 | INFO  | It takes a moment until task 24b5b7e3-24ad-4d1a-a93c-7d007b4e20b2 (ceph-bootstrap-dashboard) has been started and output is visible here.
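The `FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).` line above is Ansible's `until`/`retries`/`delay` loop polling the freshly started cephclient container until it reports healthy; the task succeeded on a later attempt, accounting for most of the 44.23s. The same poll-until-healthy pattern can be sketched in Python (the probe, retry count, and delay here are illustrative assumptions, not taken from the role):

```python
import time

def wait_until_healthy(probe, retries=10, delay=5.0, sleep=time.sleep):
    """Call probe() up to `retries` times, pausing `delay` seconds between
    attempts; return the successful attempt number or raise TimeoutError."""
    for attempt in range(1, retries + 1):
        if probe():
            return attempt
        if attempt < retries:
            sleep(delay)
    raise TimeoutError(f"service not healthy after {retries} attempts")

# Fake probe that becomes healthy on the third call (no real sleeping).
calls = iter([False, False, True])
assert wait_until_healthy(lambda: next(calls), sleep=lambda _: None) == 3
```

In the real task the probe would be the container's health status; Ansible marks the task `ok` as soon as one attempt satisfies the `until` condition.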
2026-04-05 03:15:58.455924 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 03:15:58.456034 | orchestrator | 2.16.14
2026-04-05 03:15:58.456050 | orchestrator |
2026-04-05 03:15:58.456062 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-04-05 03:15:58.456073 | orchestrator |
2026-04-05 03:15:58.456083 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-04-05 03:15:58.456116 | orchestrator | Sunday 05 April 2026 03:14:28 +0000 (0:00:00.277) 0:00:00.277 **********
2026-04-05 03:15:58.456126 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456138 | orchestrator |
2026-04-05 03:15:58.456148 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-04-05 03:15:58.456235 | orchestrator | Sunday 05 April 2026 03:14:30 +0000 (0:00:01.768) 0:00:02.045 **********
2026-04-05 03:15:58.456248 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456258 | orchestrator |
2026-04-05 03:15:58.456268 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-04-05 03:15:58.456278 | orchestrator | Sunday 05 April 2026 03:14:31 +0000 (0:00:01.127) 0:00:03.173 **********
2026-04-05 03:15:58.456287 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456297 | orchestrator |
2026-04-05 03:15:58.456307 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-04-05 03:15:58.456317 | orchestrator | Sunday 05 April 2026 03:14:32 +0000 (0:00:01.119) 0:00:04.293 **********
2026-04-05 03:15:58.456327 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456336 | orchestrator |
2026-04-05 03:15:58.456346 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-04-05 03:15:58.456356 | orchestrator | Sunday 05 April 2026 03:14:33 +0000 (0:00:01.285) 0:00:05.579 **********
2026-04-05 03:15:58.456366 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456376 | orchestrator |
2026-04-05 03:15:58.456385 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-04-05 03:15:58.456395 | orchestrator | Sunday 05 April 2026 03:14:34 +0000 (0:00:01.178) 0:00:06.757 **********
2026-04-05 03:15:58.456419 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456430 | orchestrator |
2026-04-05 03:15:58.456440 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-04-05 03:15:58.456450 | orchestrator | Sunday 05 April 2026 03:14:35 +0000 (0:00:01.151) 0:00:07.909 **********
2026-04-05 03:15:58.456461 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456473 | orchestrator |
2026-04-05 03:15:58.456485 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-04-05 03:15:58.456510 | orchestrator | Sunday 05 April 2026 03:14:38 +0000 (0:00:02.099) 0:00:10.009 **********
2026-04-05 03:15:58.456522 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456544 | orchestrator |
2026-04-05 03:15:58.456555 | orchestrator | TASK [Create admin user] *******************************************************
2026-04-05 03:15:58.456567 | orchestrator | Sunday 05 April 2026 03:14:39 +0000 (0:00:01.285) 0:00:11.295 **********
2026-04-05 03:15:58.456579 | orchestrator | changed: [testbed-manager]
2026-04-05 03:15:58.456590 | orchestrator |
2026-04-05 03:15:58.456607 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-04-05 03:15:58.456625 | orchestrator | Sunday 05 April 2026 03:15:33 +0000 (0:00:53.871) 0:01:05.167 **********
2026-04-05 03:15:58.456642 | orchestrator | skipping: [testbed-manager]
2026-04-05 03:15:58.456658 | orchestrator |
2026-04-05 03:15:58.456676 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-05 03:15:58.456695 | orchestrator |
2026-04-05 03:15:58.456713 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-05 03:15:58.456730 | orchestrator | Sunday 05 April 2026 03:15:33 +0000 (0:00:00.155) 0:01:05.323 **********
2026-04-05 03:15:58.456745 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:15:58.456757 | orchestrator |
2026-04-05 03:15:58.456770 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-05 03:15:58.456781 | orchestrator |
2026-04-05 03:15:58.456794 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-05 03:15:58.456806 | orchestrator | Sunday 05 April 2026 03:15:35 +0000 (0:00:01.892) 0:01:07.215 **********
2026-04-05 03:15:58.456817 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:15:58.456828 | orchestrator |
2026-04-05 03:15:58.456840 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-05 03:15:58.456861 | orchestrator |
2026-04-05 03:15:58.456872 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-05 03:15:58.456884 | orchestrator | Sunday 05 April 2026 03:15:46 +0000 (0:00:11.300) 0:01:18.516 **********
2026-04-05 03:15:58.456896 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:15:58.456906 | orchestrator |
2026-04-05 03:15:58.456916 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:15:58.456927 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 03:15:58.456938 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:15:58.456948 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:15:58.456958 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:15:58.456968 | orchestrator |
2026-04-05 03:15:58.456977 | orchestrator |
2026-04-05 03:15:58.456987 | orchestrator |
2026-04-05 03:15:58.456997 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:15:58.457012 | orchestrator | Sunday 05 April 2026 03:15:57 +0000 (0:00:11.432) 0:01:29.948 **********
2026-04-05 03:15:58.457035 | orchestrator | ===============================================================================
2026-04-05 03:15:58.457053 | orchestrator | Create admin user ------------------------------------------------------ 53.87s
2026-04-05 03:15:58.457090 | orchestrator | Restart ceph manager service ------------------------------------------- 24.63s
2026-04-05 03:15:58.457136 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s
2026-04-05 03:15:58.457154 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.77s
2026-04-05 03:15:58.457197 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2026-04-05 03:15:58.457215 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.29s
2026-04-05 03:15:58.457230 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.18s
2026-04-05 03:15:58.457247 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.15s
2026-04-05 03:15:58.457258 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.13s
2026-04-05 03:15:58.457267 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.12s
2026-04-05 03:15:58.457277 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-04-05 03:15:58.805947 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-04-05 03:16:00.978685 | orchestrator | 2026-04-05 03:16:00 | INFO  | Task d5c9fdef-240a-4ed3-9a3d-ed52898b85f8 (keystone) was prepared for execution.
2026-04-05 03:16:00.978791 | orchestrator | 2026-04-05 03:16:00 | INFO  | It takes a moment until task d5c9fdef-240a-4ed3-9a3d-ed52898b85f8 (keystone) has been started and output is visible here.
2026-04-05 03:16:08.515399 | orchestrator |
2026-04-05 03:16:08.515504 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:16:08.515520 | orchestrator |
2026-04-05 03:16:08.515531 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:16:08.515557 | orchestrator | Sunday 05 April 2026 03:16:05 +0000 (0:00:00.282) 0:00:00.282 **********
2026-04-05 03:16:08.515567 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:16:08.515579 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:16:08.515588 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:16:08.515611 | orchestrator |
2026-04-05 03:16:08.515621 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:16:08.515632 | orchestrator | Sunday 05 April 2026 03:16:05 +0000 (0:00:00.351) 0:00:00.634 **********
2026-04-05 03:16:08.515665 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-05 03:16:08.515676 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-05 03:16:08.515686 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-05 03:16:08.515695 | orchestrator |
2026-04-05 03:16:08.515705 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-05 03:16:08.515715 | orchestrator |
2026-04-05 03:16:08.515725 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 03:16:08.515735 | orchestrator | Sunday 05 April 2026 03:16:06 +0000 (0:00:00.482) 0:00:01.116 **********
2026-04-05 03:16:08.515746 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:16:08.515757 | orchestrator |
2026-04-05 03:16:08.515766 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-05 03:16:08.515776 | orchestrator | Sunday 05 April 2026 03:16:06 +0000 (0:00:00.627) 0:00:01.744 **********
2026-04-05 03:16:08.515792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:08.515806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:08.515840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:08.515860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:08.515872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:08.515883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:08.515893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:08.515904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:08.515914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:08.515931 | orchestrator |
2026-04-05 03:16:08.515941 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-05 03:16:08.515957 | orchestrator | Sunday 05 April 2026 03:16:08 +0000 (0:00:01.698) 0:00:03.443 **********
2026-04-05 03:16:14.619666 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:14.619759 | orchestrator |
2026-04-05 03:16:14.619771 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-05 03:16:14.619794 | orchestrator | Sunday 05 April 2026 03:16:08 +0000 (0:00:00.312) 0:00:03.756 **********
2026-04-05 03:16:14.619802 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:14.619809 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:16:14.619817 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:16:14.619824 | orchestrator |
2026-04-05 03:16:14.619831 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-05 03:16:14.619839 | orchestrator | Sunday 05 April 2026 03:16:09 +0000 (0:00:00.336) 0:00:04.092 **********
2026-04-05 03:16:14.619846 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 03:16:14.619854 | orchestrator |
2026-04-05 03:16:14.619861 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 03:16:14.619868 | orchestrator | Sunday 05 April 2026 03:16:10 +0000 (0:00:00.848) 0:00:04.941 **********
2026-04-05 03:16:14.619876 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:16:14.619883 | orchestrator |
2026-04-05 03:16:14.619890 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-05 03:16:14.619897 | orchestrator | Sunday 05 April 2026 03:16:10 +0000 (0:00:00.590) 0:00:05.532 **********
2026-04-05 03:16:14.619909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:14.619919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:14.619928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-05 03:16:14.619973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:14.619985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:14.619993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 03:16:14.620001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:14.620009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:14.620023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 03:16:14.620030 | orchestrator |
2026-04-05 03:16:14.620038 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-05 03:16:14.620046 | orchestrator | Sunday 05 April 2026 03:16:14 +0000 (0:00:03.414) 0:00:08.946 **********
2026-04-05 03:16:14.620061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:15.429345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:15.429517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:15.429553 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:16:15.429578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:15.429633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:15.429665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:15.429689 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:16:15.429731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:15.429745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-05 03:16:15.429756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:15.429775 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:16:15.429787 | orchestrator | 2026-04-05 03:16:15.429800 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-05 03:16:15.429813 | orchestrator | Sunday 05 April 2026 03:16:14 +0000 (0:00:00.606) 0:00:09.552 ********** 2026-04-05 03:16:15.429828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:15.429847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:15.429869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:18.957763 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:16:18.957854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:18.957867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:18.957897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:18.957905 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 03:16:18.957925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:18.957932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:18.957951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:18.957957 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:16:18.957964 | orchestrator | 2026-04-05 03:16:18.957970 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-05 03:16:18.957976 | orchestrator | Sunday 05 April 2026 03:16:15 +0000 (0:00:00.804) 0:00:10.357 ********** 2026-04-05 03:16:18.957980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:18.957989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:18.957996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:18.958005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 03:16:24.042723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 03:16:24.042866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-05 03:16:24.042883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:16:24.042897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:16:24.042924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 
03:16:24.042936 | orchestrator | 2026-04-05 03:16:24.042950 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-05 03:16:24.042963 | orchestrator | Sunday 05 April 2026 03:16:18 +0000 (0:00:03.530) 0:00:13.887 ********** 2026-04-05 03:16:24.042996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:24.043010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-05 03:16:24.043031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:24.043043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:24.043060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:24.043080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:27.947959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:16:27.948116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:16:27.948141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:16:27.948234 | orchestrator | 2026-04-05 03:16:27.948257 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-05 03:16:27.948278 | orchestrator | Sunday 05 April 2026 03:16:24 +0000 (0:00:05.080) 0:00:18.968 ********** 2026-04-05 03:16:27.948297 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:16:27.948317 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:16:27.948334 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:16:27.948353 | orchestrator | 
2026-04-05 03:16:27.948371 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-05 03:16:27.948389 | orchestrator | Sunday 05 April 2026 03:16:25 +0000 (0:00:01.481) 0:00:20.449 **********
2026-04-05 03:16:27.948407 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:27.948426 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:16:27.948446 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:16:27.948465 | orchestrator |
2026-04-05 03:16:27.948486 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-05 03:16:27.948507 | orchestrator | Sunday 05 April 2026 03:16:26 +0000 (0:00:00.855) 0:00:21.305 **********
2026-04-05 03:16:27.948528 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:27.948550 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:16:27.948571 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:16:27.948591 | orchestrator |
2026-04-05 03:16:27.948630 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-05 03:16:27.948651 | orchestrator | Sunday 05 April 2026 03:16:26 +0000 (0:00:00.568) 0:00:21.874 **********
2026-04-05 03:16:27.948672 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:27.948691 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:16:27.948711 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:16:27.948732 | orchestrator |
2026-04-05 03:16:27.948755 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-05 03:16:27.948773 | orchestrator | Sunday 05 April 2026 03:16:27 +0000 (0:00:00.388) 0:00:22.262 **********
2026-04-05 03:16:27.948817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:27.948851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:27.948870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:27.948890 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:16:27.948909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:27.948948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:27.948966 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:27.949000 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:16:27.949033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-05 03:16:47.572099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 03:16:47.572238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 03:16:47.572252 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:16:47.572261 | orchestrator | 2026-04-05 03:16:47.572269 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 03:16:47.572278 | orchestrator | Sunday 05 April 2026 03:16:27 +0000 (0:00:00.610) 0:00:22.873 ********** 2026-04-05 03:16:47.572285 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:16:47.572292 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:16:47.572298 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:16:47.572304 | orchestrator | 2026-04-05 03:16:47.572311 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-05 03:16:47.572317 | orchestrator | Sunday 05 April 2026 03:16:28 +0000 (0:00:00.301) 0:00:23.174 ********** 2026-04-05 03:16:47.572323 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-05 03:16:47.572330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-05 03:16:47.572336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-05 03:16:47.572361 | orchestrator |
2026-04-05 03:16:47.572383 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-05 03:16:47.572389 | orchestrator | Sunday 05 April 2026 03:16:30 +0000 (0:00:01.871) 0:00:25.046 **********
2026-04-05 03:16:47.572396 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 03:16:47.572403 | orchestrator |
2026-04-05 03:16:47.572410 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-05 03:16:47.572416 | orchestrator | Sunday 05 April 2026 03:16:31 +0000 (0:00:01.052) 0:00:26.099 **********
2026-04-05 03:16:47.572423 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:16:47.572430 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:16:47.572437 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:16:47.572443 | orchestrator |
2026-04-05 03:16:47.572450 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-05 03:16:47.572457 | orchestrator | Sunday 05 April 2026 03:16:31 +0000 (0:00:00.635) 0:00:26.734 **********
2026-04-05 03:16:47.572463 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 03:16:47.572470 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 03:16:47.572477 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 03:16:47.572484 | orchestrator |
2026-04-05 03:16:47.572490 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-05 03:16:47.572498 | orchestrator | Sunday 05 April 2026 03:16:32 +0000 (0:00:01.072) 0:00:27.807 **********
2026-04-05 03:16:47.572505 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:16:47.572512 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:16:47.572518 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:16:47.572524 | orchestrator |
2026-04-05 03:16:47.572529 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-05 03:16:47.572535 | orchestrator | Sunday 05 April 2026 03:16:33 +0000 (0:00:00.588) 0:00:28.396 **********
2026-04-05 03:16:47.572542 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-05 03:16:47.572548 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-05 03:16:47.572555 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-05 03:16:47.572562 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-05 03:16:47.572568 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-05 03:16:47.572575 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-05 03:16:47.572582 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-05 03:16:47.572589 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-05 03:16:47.572610 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-05 03:16:47.572617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-05 03:16:47.572623 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-05 03:16:47.572630 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-05 03:16:47.572637 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-05 03:16:47.572644 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-05 03:16:47.572657 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-05 03:16:47.572673 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:16:47.572698 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:16:47.572713 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:16:47.572730 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:16:47.572746 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:16:47.572757 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:16:47.572763 | orchestrator |
2026-04-05 03:16:47.572770 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-05 03:16:47.572777 | orchestrator | Sunday 05 April 2026 03:16:42 +0000 (0:00:09.049) 0:00:37.445 **********
2026-04-05 03:16:47.572783 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:16:47.572790 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:16:47.572797 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:16:47.572803
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 03:16:47.572810 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 03:16:47.572817 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 03:16:47.572823 | orchestrator | 2026-04-05 03:16:47.572830 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-05 03:16:47.572840 | orchestrator | Sunday 05 April 2026 03:16:45 +0000 (0:00:02.715) 0:00:40.161 ********** 2026-04-05 03:16:47.572850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:16:47.572862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:18:28.447383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-05 03:18:28.447608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 03:18:28.447781 | orchestrator | 2026-04-05 03:18:28.447801 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-04-05 03:18:28.447822 | orchestrator | Sunday 05 April 2026 03:16:47 +0000 (0:00:02.333) 0:00:42.495 **********
2026-04-05 03:18:28.447839 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:18:28.447859 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:18:28.447876 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:18:28.447895 | orchestrator |
2026-04-05 03:18:28.447915 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-05 03:18:28.447934 | orchestrator | Sunday 05 April 2026 03:16:48 +0000 (0:00:00.541) 0:00:43.036 **********
2026-04-05 03:18:28.447954 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.447973 | orchestrator |
2026-04-05 03:18:28.447993 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-05 03:18:28.448014 | orchestrator | Sunday 05 April 2026 03:16:50 +0000 (0:00:02.340) 0:00:45.377 **********
2026-04-05 03:18:28.448032 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448051 | orchestrator |
2026-04-05 03:18:28.448071 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-05 03:18:28.448089 | orchestrator | Sunday 05 April 2026 03:16:52 +0000 (0:00:02.278) 0:00:47.656 **********
2026-04-05 03:18:28.448108 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:18:28.448128 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:18:28.448189 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:18:28.448211 | orchestrator |
2026-04-05 03:18:28.448231 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-05 03:18:28.448249 | orchestrator | Sunday 05 April 2026 03:16:53 +0000 (0:00:00.862) 0:00:48.519 **********
2026-04-05 03:18:28.448268 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:18:28.448285 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:18:28.448302 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:18:28.448320 | orchestrator |
2026-04-05 03:18:28.448340 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-05 03:18:28.448370 | orchestrator | Sunday 05 April 2026 03:16:53 +0000 (0:00:00.341) 0:00:48.861 **********
2026-04-05 03:18:28.448389 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:18:28.448407 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:18:28.448427 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:18:28.448446 | orchestrator |
2026-04-05 03:18:28.448465 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-05 03:18:28.448483 | orchestrator | Sunday 05 April 2026 03:16:54 +0000 (0:00:00.576) 0:00:49.437 **********
2026-04-05 03:18:28.448502 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448520 | orchestrator |
2026-04-05 03:18:28.448539 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-05 03:18:28.448557 | orchestrator | Sunday 05 April 2026 03:17:09 +0000 (0:00:14.966) 0:01:04.403 **********
2026-04-05 03:18:28.448575 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448593 | orchestrator |
2026-04-05 03:18:28.448612 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-05 03:18:28.448630 | orchestrator | Sunday 05 April 2026 03:17:21 +0000 (0:00:11.579) 0:01:15.983 **********
2026-04-05 03:18:28.448663 | orchestrator |
2026-04-05 03:18:28.448681 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-05 03:18:28.448701 | orchestrator | Sunday 05 April 2026 03:17:21 +0000 (0:00:00.085) 0:01:16.069 **********
2026-04-05 03:18:28.448720 | orchestrator |
2026-04-05 03:18:28.448738 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-05 03:18:28.448752 | orchestrator | Sunday 05 April 2026 03:17:21 +0000 (0:00:00.086) 0:01:16.155 **********
2026-04-05 03:18:28.448763 | orchestrator |
2026-04-05 03:18:28.448774 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-05 03:18:28.448785 | orchestrator | Sunday 05 April 2026 03:17:21 +0000 (0:00:00.077) 0:01:16.233 **********
2026-04-05 03:18:28.448796 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448807 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:18:28.448818 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:18:28.448829 | orchestrator |
2026-04-05 03:18:28.448840 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-05 03:18:28.448850 | orchestrator | Sunday 05 April 2026 03:18:10 +0000 (0:00:48.807) 0:02:05.040 **********
2026-04-05 03:18:28.448861 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448872 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:18:28.448883 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:18:28.448893 | orchestrator |
2026-04-05 03:18:28.448904 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-05 03:18:28.448915 | orchestrator | Sunday 05 April 2026 03:18:15 +0000 (0:00:05.291) 0:02:10.332 **********
2026-04-05 03:18:28.448926 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:18:28.448937 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:18:28.448947 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:18:28.448958 | orchestrator |
2026-04-05 03:18:28.448969 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 03:18:28.448980 | orchestrator | Sunday 05 April 2026 03:18:27 +0000 (0:00:12.444) 0:02:22.776 **********
2026-04-05 03:18:28.449003 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:19:22.249807 | orchestrator |
2026-04-05 03:19:22.249887 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-05 03:19:22.249895 | orchestrator | Sunday 05 April 2026 03:18:28 +0000 (0:00:00.599) 0:02:23.376 **********
2026-04-05 03:19:22.249899 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:19:22.249905 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:19:22.249909 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:19:22.249913 | orchestrator |
2026-04-05 03:19:22.249917 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-05 03:19:22.249921 | orchestrator | Sunday 05 April 2026 03:18:29 +0000 (0:00:01.233) 0:02:24.609 **********
2026-04-05 03:19:22.249925 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:19:22.249930 | orchestrator |
2026-04-05 03:19:22.249934 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-05 03:19:22.249938 | orchestrator | Sunday 05 April 2026 03:18:31 +0000 (0:00:01.890) 0:02:26.499 **********
2026-04-05 03:19:22.249942 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-05 03:19:22.249946 | orchestrator |
2026-04-05 03:19:22.249949 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-04-05 03:19:22.249953 | orchestrator | Sunday 05 April 2026 03:18:44 +0000 (0:00:12.884) 0:02:39.384 **********
2026-04-05 03:19:22.249957 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-05 03:19:22.249961 | orchestrator |
2026-04-05 03:19:22.249965 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-04-05 03:19:22.249968 | orchestrator | Sunday 05 April 2026 03:19:09 +0000 (0:00:25.434) 0:03:04.819 **********
2026-04-05 03:19:22.249972 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-05 03:19:22.249992 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-05 03:19:22.249996 | orchestrator |
2026-04-05 03:19:22.250000 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-05 03:19:22.250004 | orchestrator | Sunday 05 April 2026 03:19:16 +0000 (0:00:07.035) 0:03:11.854 **********
2026-04-05 03:19:22.250008 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:19:22.250011 | orchestrator |
2026-04-05 03:19:22.250050 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-05 03:19:22.250054 | orchestrator | Sunday 05 April 2026 03:19:17 +0000 (0:00:00.133) 0:03:11.988 **********
2026-04-05 03:19:22.250058 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:19:22.250062 | orchestrator |
2026-04-05 03:19:22.250066 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-05 03:19:22.250069 | orchestrator | Sunday 05 April 2026 03:19:17 +0000 (0:00:00.136) 0:03:12.124 **********
2026-04-05 03:19:22.250073 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:19:22.250077 | orchestrator |
2026-04-05 03:19:22.250090 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-04-05 03:19:22.250094 | orchestrator | Sunday 05 April 2026 03:19:17 +0000 (0:00:00.140) 0:03:12.264 **********
2026-04-05 03:19:22.250098 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:19:22.250101 | orchestrator |
2026-04-05 03:19:22.250105 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-05 03:19:22.250109 | orchestrator | Sunday 05 April 2026 03:19:17 +0000 (0:00:00.585) 0:03:12.850 **********
2026-04-05 03:19:22.250113 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:19:22.250116 | orchestrator |
2026-04-05 03:19:22.250120 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 03:19:22.250124 | orchestrator | Sunday 05 April 2026 03:19:21 +0000 (0:00:03.439) 0:03:16.290 **********
2026-04-05 03:19:22.250128 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:19:22.250131 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:19:22.250135 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:19:22.250139 | orchestrator |
2026-04-05 03:19:22.250169 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:19:22.250175 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 03:19:22.250181 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 03:19:22.250185 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 03:19:22.250188 | orchestrator |
2026-04-05 03:19:22.250192 | orchestrator |
2026-04-05 03:19:22.250196 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:19:22.250200 | orchestrator | Sunday 05 April 2026 03:19:21 +0000 (0:00:00.469) 0:03:16.759 **********
2026-04-05 03:19:22.250204 | orchestrator | ===============================================================================
2026-04-05 03:19:22.250207 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 48.81s
2026-04-05 03:19:22.250211 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.43s
2026-04-05 03:19:22.250215 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.97s
2026-04-05 03:19:22.250219 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.88s
2026-04-05 03:19:22.250223 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.44s
2026-04-05 03:19:22.250226 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.58s
2026-04-05 03:19:22.250230 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.05s
2026-04-05 03:19:22.250234 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.04s
2026-04-05 03:19:22.250242 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.29s
2026-04-05 03:19:22.250255 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.08s
2026-04-05 03:19:22.250260 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.53s
2026-04-05 03:19:22.250263 | orchestrator | keystone : Creating default user role ----------------------------------- 3.44s
2026-04-05 03:19:22.250267 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.41s
2026-04-05 03:19:22.250271 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.72s
2026-04-05 03:19:22.250275 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.34s
2026-04-05 03:19:22.250278 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.33s
2026-04-05 03:19:22.250282 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.28s
2026-04-05 03:19:22.250286 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.89s
2026-04-05 03:19:22.250289 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.87s
2026-04-05 03:19:22.250293 | orchestrator | keystone : Ensuring config directories exist ----------------------------
1.70s 2026-04-05 03:19:24.732758 | orchestrator | 2026-04-05 03:19:24 | INFO  | Task 105c6361-606a-4b32-ba45-b69ad390de32 (placement) was prepared for execution. 2026-04-05 03:19:24.732887 | orchestrator | 2026-04-05 03:19:24 | INFO  | It takes a moment until task 105c6361-606a-4b32-ba45-b69ad390de32 (placement) has been started and output is visible here. 2026-04-05 03:20:01.483493 | orchestrator | 2026-04-05 03:20:01.483601 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:20:01.483619 | orchestrator | 2026-04-05 03:20:01.483632 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:20:01.483644 | orchestrator | Sunday 05 April 2026 03:19:29 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-04-05 03:20:01.483655 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:20:01.483668 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:20:01.483680 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:20:01.483691 | orchestrator | 2026-04-05 03:20:01.483702 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:20:01.483714 | orchestrator | Sunday 05 April 2026 03:19:29 +0000 (0:00:00.373) 0:00:00.658 ********** 2026-04-05 03:20:01.483726 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-05 03:20:01.483737 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-05 03:20:01.483748 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-05 03:20:01.483759 | orchestrator | 2026-04-05 03:20:01.483786 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-05 03:20:01.483798 | orchestrator | 2026-04-05 03:20:01.483809 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 03:20:01.483820 | orchestrator | Sunday 05 April 2026 03:19:30 
+0000 (0:00:00.472) 0:00:01.130 ********** 2026-04-05 03:20:01.483833 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:20:01.483845 | orchestrator | 2026-04-05 03:20:01.483856 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-05 03:20:01.483867 | orchestrator | Sunday 05 April 2026 03:19:30 +0000 (0:00:00.611) 0:00:01.742 ********** 2026-04-05 03:20:01.483878 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-05 03:20:01.483889 | orchestrator | 2026-04-05 03:20:01.483900 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-05 03:20:01.483911 | orchestrator | Sunday 05 April 2026 03:19:34 +0000 (0:00:04.178) 0:00:05.921 ********** 2026-04-05 03:20:01.483922 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-05 03:20:01.483956 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-05 03:20:01.483968 | orchestrator | 2026-04-05 03:20:01.483979 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-05 03:20:01.483990 | orchestrator | Sunday 05 April 2026 03:19:41 +0000 (0:00:06.849) 0:00:12.771 ********** 2026-04-05 03:20:01.484001 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-05 03:20:01.484015 | orchestrator | 2026-04-05 03:20:01.484028 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-05 03:20:01.484041 | orchestrator | Sunday 05 April 2026 03:19:45 +0000 (0:00:03.859) 0:00:16.630 ********** 2026-04-05 03:20:01.484055 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:20:01.484068 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-04-05 03:20:01.484081 | orchestrator | 2026-04-05 03:20:01.484094 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-05 03:20:01.484107 | orchestrator | Sunday 05 April 2026 03:19:49 +0000 (0:00:04.454) 0:00:21.085 ********** 2026-04-05 03:20:01.484125 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:20:01.484173 | orchestrator | 2026-04-05 03:20:01.484204 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-05 03:20:01.484224 | orchestrator | Sunday 05 April 2026 03:19:53 +0000 (0:00:03.343) 0:00:24.428 ********** 2026-04-05 03:20:01.484243 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-05 03:20:01.484261 | orchestrator | 2026-04-05 03:20:01.484279 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 03:20:01.484299 | orchestrator | Sunday 05 April 2026 03:19:57 +0000 (0:00:03.788) 0:00:28.216 ********** 2026-04-05 03:20:01.484318 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:01.484337 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:20:01.484357 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:20:01.484375 | orchestrator | 2026-04-05 03:20:01.484392 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-05 03:20:01.484410 | orchestrator | Sunday 05 April 2026 03:19:57 +0000 (0:00:00.345) 0:00:28.562 ********** 2026-04-05 03:20:01.484433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:01.484495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:01.484534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:01.484552 | orchestrator | 2026-04-05 03:20:01.484571 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-05 03:20:01.484590 | orchestrator | Sunday 05 April 2026 03:19:58 +0000 (0:00:00.998) 0:00:29.561 ********** 2026-04-05 03:20:01.484609 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:01.484627 | orchestrator | 2026-04-05 03:20:01.484645 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-05 03:20:01.484665 | orchestrator | Sunday 05 April 2026 03:19:58 +0000 (0:00:00.363) 0:00:29.925 ********** 2026-04-05 03:20:01.484685 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:01.484703 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:20:01.484721 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:20:01.484740 | orchestrator | 2026-04-05 03:20:01.484759 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 03:20:01.484777 | orchestrator | Sunday 05 April 2026 03:19:59 +0000 (0:00:00.327) 0:00:30.252 ********** 2026-04-05 03:20:01.484796 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:20:01.484815 | orchestrator | 2026-04-05 03:20:01.484834 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-05 03:20:01.484852 | orchestrator | Sunday 05 April 2026 03:19:59 +0000 (0:00:00.576) 0:00:30.829 ********** 2026-04-05 
03:20:01.484870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:01.484907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:04.654874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:04.654970 | orchestrator | 2026-04-05 03:20:04.654996 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-05 03:20:04.655009 | orchestrator | Sunday 05 April 2026 03:20:01 +0000 (0:00:01.738) 0:00:32.567 ********** 2026-04-05 03:20:04.655022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655033 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:04.655044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655055 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:20:04.655065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655094 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:20:04.655104 | orchestrator | 2026-04-05 03:20:04.655115 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-05 03:20:04.655168 | orchestrator | Sunday 05 April 2026 03:20:02 +0000 (0:00:00.603) 0:00:33.171 ********** 2026-04-05 03:20:04.655192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655209 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:04.655224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655241 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:20:04.655251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:04.655261 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:20:04.655271 | orchestrator | 2026-04-05 03:20:04.655281 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-05 03:20:04.655291 | orchestrator | Sunday 05 April 2026 03:20:02 +0000 (0:00:00.833) 0:00:34.004 ********** 2026-04-05 03:20:04.655300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:04.655341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:12.121247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:12.121358 | orchestrator | 2026-04-05 03:20:12.121377 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-05 03:20:12.121391 | orchestrator | Sunday 05 April 2026 03:20:04 +0000 (0:00:01.738) 0:00:35.743 ********** 2026-04-05 03:20:12.121404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:12.121417 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:12.121468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:12.121482 | orchestrator | 2026-04-05 03:20:12.121493 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-04-05 03:20:12.121504 | orchestrator | Sunday 05 April 2026 03:20:07 +0000 (0:00:02.480) 0:00:38.223 ********** 2026-04-05 03:20:12.121534 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 03:20:12.121547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 03:20:12.121558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-05 03:20:12.121569 | orchestrator | 2026-04-05 03:20:12.121580 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-05 03:20:12.121591 | orchestrator | Sunday 05 April 2026 03:20:08 +0000 (0:00:01.575) 0:00:39.798 ********** 2026-04-05 03:20:12.121602 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:20:12.121614 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:20:12.121625 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:20:12.121636 | orchestrator | 2026-04-05 03:20:12.121652 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-05 03:20:12.121670 | orchestrator | Sunday 05 April 2026 03:20:10 +0000 (0:00:01.485) 0:00:41.284 ********** 2026-04-05 03:20:12.121689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:12.121708 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:20:12.121728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:12.121757 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:20:12.121777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-05 03:20:12.121796 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:20:12.121814 | orchestrator | 2026-04-05 03:20:12.121834 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-05 03:20:12.121865 | orchestrator | Sunday 05 April 2026 03:20:10 +0000 (0:00:00.796) 0:00:42.080 ********** 2026-04-05 03:20:12.121899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:42.032441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:42.032595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-05 03:20:42.032623 | orchestrator | 2026-04-05 03:20:42.032640 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-05 03:20:42.032652 | orchestrator | Sunday 05 April 2026 03:20:12 +0000 (0:00:01.133) 0:00:43.214 ********** 2026-04-05 03:20:42.032662 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:20:42.032672 | orchestrator | 2026-04-05 03:20:42.032680 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-05 03:20:42.032689 | orchestrator | Sunday 05 April 2026 03:20:14 +0000 (0:00:02.320) 0:00:45.534 ********** 2026-04-05 03:20:42.032698 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:20:42.032707 | orchestrator | 2026-04-05 03:20:42.032716 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-05 03:20:42.032725 | orchestrator | Sunday 05 April 2026 03:20:16 +0000 (0:00:02.282) 0:00:47.817 ********** 2026-04-05 03:20:42.032733 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:20:42.032742 | orchestrator | 2026-04-05 03:20:42.032750 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 03:20:42.032759 | orchestrator | Sunday 05 April 2026 03:20:31 +0000 (0:00:14.474) 0:01:02.292 ********** 2026-04-05 03:20:42.032768 | orchestrator | 2026-04-05 03:20:42.032776 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 03:20:42.032785 | orchestrator | Sunday 05 April 2026 03:20:31 +0000 (0:00:00.074) 0:01:02.366 ********** 2026-04-05 03:20:42.032794 | orchestrator | 2026-04-05 03:20:42.032802 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 03:20:42.032811 | orchestrator | Sunday 05 April 2026 03:20:31 +0000 (0:00:00.070) 0:01:02.436 ********** 2026-04-05 03:20:42.032819 | orchestrator | 2026-04-05 03:20:42.032839 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-05 03:20:42.032848 | orchestrator | Sunday 05 April 2026 03:20:31 +0000 (0:00:00.074) 0:01:02.511 ********** 2026-04-05 03:20:42.032857 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:20:42.032880 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:20:42.032889 | orchestrator | changed: [testbed-node-2] 2026-04-05 
03:20:42.032898 | orchestrator | 2026-04-05 03:20:42.032906 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:20:42.032916 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 03:20:42.032925 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 03:20:42.032934 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 03:20:42.032944 | orchestrator | 2026-04-05 03:20:42.032955 | orchestrator | 2026-04-05 03:20:42.032965 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:20:42.032976 | orchestrator | Sunday 05 April 2026 03:20:41 +0000 (0:00:10.225) 0:01:12.737 ********** 2026-04-05 03:20:42.032994 | orchestrator | =============================================================================== 2026-04-05 03:20:42.033004 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.47s 2026-04-05 03:20:42.033033 | orchestrator | placement : Restart placement-api container ---------------------------- 10.23s 2026-04-05 03:20:42.033083 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.85s 2026-04-05 03:20:42.033096 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.45s 2026-04-05 03:20:42.033106 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.18s 2026-04-05 03:20:42.033116 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.86s 2026-04-05 03:20:42.033127 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.79s 2026-04-05 03:20:42.033137 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.34s 2026-04-05 03:20:42.033149 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.48s 2026-04-05 03:20:42.033159 | orchestrator | placement : Creating placement databases -------------------------------- 2.32s 2026-04-05 03:20:42.033170 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.28s 2026-04-05 03:20:42.033180 | orchestrator | placement : Copying over config.json files for services ----------------- 1.74s 2026-04-05 03:20:42.033191 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.74s 2026-04-05 03:20:42.033199 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.58s 2026-04-05 03:20:42.033208 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.49s 2026-04-05 03:20:42.033216 | orchestrator | placement : Check placement containers ---------------------------------- 1.13s 2026-04-05 03:20:42.033225 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.00s 2026-04-05 03:20:42.033234 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.83s 2026-04-05 03:20:42.033242 | orchestrator | placement : Copying over existing policy file --------------------------- 0.80s 2026-04-05 03:20:42.033251 | orchestrator | placement : include_tasks ----------------------------------------------- 0.61s 2026-04-05 03:20:44.509320 | orchestrator | 2026-04-05 03:20:44 | INFO  | Task 268f46c9-7fb2-495f-a546-f42fe1f5d9e2 (neutron) was prepared for execution. 2026-04-05 03:20:44.509401 | orchestrator | 2026-04-05 03:20:44 | INFO  | It takes a moment until task 268f46c9-7fb2-495f-a546-f42fe1f5d9e2 (neutron) has been started and output is visible here. 
2026-04-05 03:21:35.305766 | orchestrator | 2026-04-05 03:21:35.305952 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:21:35.305977 | orchestrator | 2026-04-05 03:21:35.305989 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:21:35.306001 | orchestrator | Sunday 05 April 2026 03:20:48 +0000 (0:00:00.292) 0:00:00.292 ********** 2026-04-05 03:21:35.306013 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:21:35.306104 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:21:35.306117 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:21:35.306128 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:21:35.306139 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:21:35.306149 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:21:35.306160 | orchestrator | 2026-04-05 03:21:35.306171 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:21:35.306183 | orchestrator | Sunday 05 April 2026 03:20:49 +0000 (0:00:00.771) 0:00:01.063 ********** 2026-04-05 03:21:35.306194 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-05 03:21:35.306205 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-05 03:21:35.306216 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-05 03:21:35.306227 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-05 03:21:35.306238 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-05 03:21:35.306274 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-05 03:21:35.306288 | orchestrator | 2026-04-05 03:21:35.306301 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-05 03:21:35.306313 | orchestrator | 2026-04-05 03:21:35.306325 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-05 03:21:35.306339 | orchestrator | Sunday 05 April 2026 03:20:50 +0000 (0:00:00.687) 0:00:01.750 ********** 2026-04-05 03:21:35.306366 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:21:35.306380 | orchestrator | 2026-04-05 03:21:35.306393 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-05 03:21:35.306406 | orchestrator | Sunday 05 April 2026 03:20:51 +0000 (0:00:01.288) 0:00:03.039 ********** 2026-04-05 03:21:35.306419 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:21:35.306431 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:21:35.306444 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:21:35.306457 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:21:35.306470 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:21:35.306483 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:21:35.306496 | orchestrator | 2026-04-05 03:21:35.306508 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-05 03:21:35.306529 | orchestrator | Sunday 05 April 2026 03:20:53 +0000 (0:00:01.445) 0:00:04.485 ********** 2026-04-05 03:21:35.306559 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:21:35.306578 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:21:35.306596 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:21:35.306614 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:21:35.306630 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:21:35.306648 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:21:35.306666 | orchestrator | 2026-04-05 03:21:35.306686 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-05 03:21:35.306706 | orchestrator | Sunday 05 April 2026 03:20:54 +0000 (0:00:01.214) 0:00:05.699 ********** 
2026-04-05 03:21:35.306726 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 03:21:35.306745 | orchestrator |  "changed": false, 2026-04-05 03:21:35.306760 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.306772 | orchestrator | } 2026-04-05 03:21:35.306783 | orchestrator | ok: [testbed-node-1] => { 2026-04-05 03:21:35.306794 | orchestrator |  "changed": false, 2026-04-05 03:21:35.306805 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.306816 | orchestrator | } 2026-04-05 03:21:35.306826 | orchestrator | ok: [testbed-node-2] => { 2026-04-05 03:21:35.306837 | orchestrator |  "changed": false, 2026-04-05 03:21:35.306848 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.306859 | orchestrator | } 2026-04-05 03:21:35.306869 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 03:21:35.306880 | orchestrator |  "changed": false, 2026-04-05 03:21:35.306891 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.306955 | orchestrator | } 2026-04-05 03:21:35.306969 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 03:21:35.306980 | orchestrator |  "changed": false, 2026-04-05 03:21:35.306991 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.307002 | orchestrator | } 2026-04-05 03:21:35.307013 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 03:21:35.307024 | orchestrator |  "changed": false, 2026-04-05 03:21:35.307035 | orchestrator |  "msg": "All assertions passed" 2026-04-05 03:21:35.307046 | orchestrator | } 2026-04-05 03:21:35.307056 | orchestrator | 2026-04-05 03:21:35.307067 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-05 03:21:35.307078 | orchestrator | Sunday 05 April 2026 03:20:55 +0000 (0:00:00.906) 0:00:06.606 ********** 2026-04-05 03:21:35.307089 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:35.307100 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:35.307111 | orchestrator 
| skipping: [testbed-node-2] 2026-04-05 03:21:35.307133 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:35.307144 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:35.307155 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:35.307166 | orchestrator | 2026-04-05 03:21:35.307177 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-05 03:21:35.307187 | orchestrator | Sunday 05 April 2026 03:20:55 +0000 (0:00:00.684) 0:00:07.290 ********** 2026-04-05 03:21:35.307198 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-05 03:21:35.307209 | orchestrator | 2026-04-05 03:21:35.307220 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-05 03:21:35.307231 | orchestrator | Sunday 05 April 2026 03:20:59 +0000 (0:00:03.956) 0:00:11.247 ********** 2026-04-05 03:21:35.307242 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-05 03:21:35.307254 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-05 03:21:35.307265 | orchestrator | 2026-04-05 03:21:35.307298 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-05 03:21:35.307310 | orchestrator | Sunday 05 April 2026 03:21:06 +0000 (0:00:06.746) 0:00:17.994 ********** 2026-04-05 03:21:35.307427 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:21:35.307449 | orchestrator | 2026-04-05 03:21:35.307468 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-05 03:21:35.307488 | orchestrator | Sunday 05 April 2026 03:21:09 +0000 (0:00:03.215) 0:00:21.210 ********** 2026-04-05 03:21:35.307506 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:21:35.307524 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-04-05 03:21:35.307536 | orchestrator | 2026-04-05 03:21:35.307547 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-05 03:21:35.307557 | orchestrator | Sunday 05 April 2026 03:21:13 +0000 (0:00:04.095) 0:00:25.306 ********** 2026-04-05 03:21:35.307571 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:21:35.307588 | orchestrator | 2026-04-05 03:21:35.307605 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-04-05 03:21:35.307620 | orchestrator | Sunday 05 April 2026 03:21:17 +0000 (0:00:03.355) 0:00:28.662 ********** 2026-04-05 03:21:35.307638 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-05 03:21:35.307654 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-05 03:21:35.307672 | orchestrator | 2026-04-05 03:21:35.307689 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 03:21:35.307708 | orchestrator | Sunday 05 April 2026 03:21:25 +0000 (0:00:08.339) 0:00:37.002 ********** 2026-04-05 03:21:35.307728 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:35.307747 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:35.307765 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:35.307784 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:35.307801 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:35.307833 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:35.307853 | orchestrator | 2026-04-05 03:21:35.307871 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-05 03:21:35.307891 | orchestrator | Sunday 05 April 2026 03:21:26 +0000 (0:00:00.840) 0:00:37.842 ********** 2026-04-05 03:21:35.307936 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
03:21:35.307951 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:35.307961 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:35.307972 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:35.307983 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:35.307994 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:35.308004 | orchestrator | 2026-04-05 03:21:35.308015 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-05 03:21:35.308026 | orchestrator | Sunday 05 April 2026 03:21:28 +0000 (0:00:02.358) 0:00:40.201 ********** 2026-04-05 03:21:35.308048 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:21:35.308059 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:21:35.308070 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:21:35.308081 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:21:35.308091 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:21:35.308102 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:21:35.308113 | orchestrator | 2026-04-05 03:21:35.308123 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-05 03:21:35.308134 | orchestrator | Sunday 05 April 2026 03:21:30 +0000 (0:00:01.238) 0:00:41.440 ********** 2026-04-05 03:21:35.308145 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:35.308155 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:35.308166 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:35.308176 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:35.308187 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:35.308197 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:35.308208 | orchestrator | 2026-04-05 03:21:35.308219 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-05 03:21:35.308230 | orchestrator | Sunday 05 April 2026 03:21:32 +0000 (0:00:02.592) 
0:00:44.032 ********** 2026-04-05 03:21:35.308245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:35.308277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:41.056436 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:41.056587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:41.056602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:41.056609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:41.056616 | orchestrator | 2026-04-05 03:21:41.056624 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-05 03:21:41.056633 | orchestrator | Sunday 05 April 2026 03:21:35 +0000 (0:00:02.592) 0:00:46.625 ********** 2026-04-05 03:21:41.056639 | orchestrator | [WARNING]: Skipped 2026-04-05 03:21:41.056647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-05 03:21:41.056654 | orchestrator | due to this access issue: 2026-04-05 03:21:41.056661 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-05 03:21:41.056668 | orchestrator | a directory 2026-04-05 03:21:41.056674 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:21:41.056680 | orchestrator | 2026-04-05 03:21:41.056687 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 03:21:41.056693 | orchestrator | Sunday 05 April 2026 03:21:36 +0000 (0:00:00.862) 0:00:47.487 ********** 2026-04-05 03:21:41.056700 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:21:41.056708 | orchestrator | 2026-04-05 03:21:41.056714 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-05 03:21:41.056733 | orchestrator | Sunday 05 April 2026 03:21:37 +0000 (0:00:01.357) 0:00:48.844 ********** 2026-04-05 03:21:41.056744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:41.056758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:41.056765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:41.056772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:21:41.056785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:46.336583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:21:46.336689 | orchestrator | 2026-04-05 03:21:46.336705 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-05 03:21:46.336717 | orchestrator | Sunday 05 April 2026 03:21:41 +0000 (0:00:03.527) 0:00:52.372 ********** 2026-04-05 03:21:46.336729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:46.336741 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:46.336752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:46.336763 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:46.336773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:46.336783 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:46.336832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:46.336844 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:46.336861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:46.336871 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:46.337019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:46.337038 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 03:21:46.337054 | orchestrator | 2026-04-05 03:21:46.337072 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-05 03:21:46.337089 | orchestrator | Sunday 05 April 2026 03:21:43 +0000 (0:00:02.170) 0:00:54.543 ********** 2026-04-05 03:21:46.337108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:46.337120 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:46.337132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:46.337153 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:46.337182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:52.091098 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:52.091221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 
03:21:52.091242 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:52.091263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:52.091343 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:52.091368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:52.091416 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:52.091429 | orchestrator | 2026-04-05 
03:21:52.091441 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-05 03:21:52.091454 | orchestrator | Sunday 05 April 2026 03:21:46 +0000 (0:00:03.106) 0:00:57.649 ********** 2026-04-05 03:21:52.091465 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:52.091482 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:52.091501 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:52.091519 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:52.091536 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:52.091554 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:52.091572 | orchestrator | 2026-04-05 03:21:52.091592 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-05 03:21:52.091611 | orchestrator | Sunday 05 April 2026 03:21:48 +0000 (0:00:02.446) 0:01:00.096 ********** 2026-04-05 03:21:52.091631 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:52.091649 | orchestrator | 2026-04-05 03:21:52.091667 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-05 03:21:52.091682 | orchestrator | Sunday 05 April 2026 03:21:48 +0000 (0:00:00.150) 0:01:00.247 ********** 2026-04-05 03:21:52.091701 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:52.091720 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:52.091739 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:52.091758 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:52.091776 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:52.091795 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:21:52.091808 | orchestrator | 2026-04-05 03:21:52.091828 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-05 03:21:52.091847 | orchestrator | Sunday 05 April 2026 03:21:49 +0000 (0:00:00.663) 
0:01:00.910 ********** 2026-04-05 03:21:52.091940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:52.091963 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:21:52.091975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 
03:21:52.091987 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:21:52.092012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:21:52.092024 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:21:52.092035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:52.092047 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:21:52.092064 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:21:52.092076 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:21:52.092097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:22:00.719720 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:22:00.719827 | orchestrator | 2026-04-05 03:22:00.719969 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-05 03:22:00.719986 | orchestrator | Sunday 05 April 2026 03:21:52 +0000 (0:00:02.486) 0:01:03.397 ********** 2026-04-05 03:22:00.720001 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:00.720044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:00.720056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:22:00.720082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:00.720113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:22:00.720163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:22:00.720176 | orchestrator | 2026-04-05 03:22:00.720186 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-05 03:22:00.720196 | orchestrator | Sunday 05 April 2026 03:21:55 +0000 (0:00:03.232) 0:01:06.629 ********** 2026-04-05 03:22:00.720207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:00.720218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:00.720235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:22:00.720257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:22:10.119873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:22:10.119992 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:10.120010 | orchestrator |
2026-04-05 03:22:10.120024 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-05 03:22:10.120037 | orchestrator | Sunday 05 April 2026 03:22:00 +0000 (0:00:05.401) 0:01:12.031 **********
2026-04-05 03:22:10.120049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:10.120080 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:10.120093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:10.120124 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:10.120158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:10.120178 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:10.120194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:10.120215 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:10.120234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:10.120253 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:10.120280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:10.120300 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:10.120318 | orchestrator |
2026-04-05 03:22:10.120337 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-05 03:22:10.120371 | orchestrator | Sunday 05 April 2026 03:22:02 +0000 (0:00:02.214) 0:01:14.246 **********
2026-04-05 03:22:10.120391 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:10.120410 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:10.120426 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:10.120440 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:22:10.120452 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:22:10.120465 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:22:10.120478 | orchestrator |
2026-04-05 03:22:10.120490 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-05 03:22:10.120504 | orchestrator | Sunday 05 April 2026 03:22:06 +0000 (0:00:03.317) 0:01:17.563 **********
2026-04-05 03:22:10.120528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:29.102670 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.102867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:29.102894 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.102908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:29.102920 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.102933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:29.102988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:29.103022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:29.103034 | orchestrator |
2026-04-05 03:22:29.103047 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-04-05 03:22:29.103060 | orchestrator | Sunday 05 April 2026 03:22:10 +0000 (0:00:03.879) 0:01:21.443 **********
2026-04-05 03:22:29.103071 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103081 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103092 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103103 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103113 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103124 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103134 | orchestrator |
2026-04-05 03:22:29.103145 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-05 03:22:29.103156 | orchestrator | Sunday 05 April 2026 03:22:12 +0000 (0:00:02.188) 0:01:23.631 **********
2026-04-05 03:22:29.103167 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103178 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103188 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103199 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103212 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103224 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103236 | orchestrator |
2026-04-05 03:22:29.103248 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-05 03:22:29.103261 | orchestrator | Sunday 05 April 2026 03:22:14 +0000 (0:00:02.326) 0:01:25.958 **********
2026-04-05 03:22:29.103274 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103287 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103300 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103313 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103325 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103339 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103352 | orchestrator |
2026-04-05 03:22:29.103364 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-05 03:22:29.103390 | orchestrator | Sunday 05 April 2026 03:22:17 +0000 (0:00:02.407) 0:01:28.366 **********
2026-04-05 03:22:29.103417 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103443 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103463 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103480 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103499 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103517 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103536 | orchestrator |
2026-04-05 03:22:29.103556 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-05 03:22:29.103574 | orchestrator | Sunday 05 April 2026 03:22:19 +0000 (0:00:02.221) 0:01:30.587 **********
2026-04-05 03:22:29.103593 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103612 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103649 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103665 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103677 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103687 | orchestrator |
2026-04-05 03:22:29.103698 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-05 03:22:29.103709 | orchestrator | Sunday 05 April 2026 03:22:21 +0000 (0:00:02.435) 0:01:33.023 **********
2026-04-05 03:22:29.103719 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103730 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103741 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103751 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103770 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.103823 | orchestrator |
2026-04-05 03:22:29.103834 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-05 03:22:29.103845 | orchestrator | Sunday 05 April 2026 03:22:24 +0000 (0:00:02.567) 0:01:35.590 **********
2026-04-05 03:22:29.103855 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103867 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:29.103878 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103888 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:29.103899 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103910 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:29.103920 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103931 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:29.103945 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103964 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:29.103981 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-05 03:22:29.103999 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:29.104016 | orchestrator |
2026-04-05 03:22:29.104069 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-05 03:22:29.104091 | orchestrator | Sunday 05 April 2026 03:22:26 +0000 (0:00:02.453) 0:01:38.044 **********
2026-04-05 03:22:29.104127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.527753 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:31.527942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.527957 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:31.527965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.527972 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:31.528000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:31.528008 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:31.528014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:31.528081 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:31.528110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:31.528119 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:31.528126 | orchestrator |
2026-04-05 03:22:31.528134 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-05 03:22:31.528144 | orchestrator | Sunday 05 April 2026 03:22:29 +0000 (0:00:02.379) 0:01:40.423 **********
2026-04-05 03:22:31.528151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.528159 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:31.528172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.528179 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:31.528186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:31.528201 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:31.528208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:31.528215 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:31.528228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:59.892101 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 03:22:59.892215 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892219 | orchestrator |
2026-04-05 03:22:59.892224 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-05 03:22:59.892230 | orchestrator | Sunday 05 April 2026 03:22:31 +0000 (0:00:02.414) 0:01:42.837 **********
2026-04-05 03:22:59.892234 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892238 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892242 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892245 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892251 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892285 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892290 | orchestrator |
2026-04-05 03:22:59.892306 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-05 03:22:59.892310 | orchestrator | Sunday 05 April 2026 03:22:33 +0000 (0:00:02.205) 0:01:45.043 **********
2026-04-05 03:22:59.892314 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892318 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892322 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892326 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:22:59.892330 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:22:59.892333 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:22:59.892337 | orchestrator |
2026-04-05 03:22:59.892341 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-05 03:22:59.892363 | orchestrator | Sunday 05 April 2026 03:22:37 +0000 (0:00:04.005) 0:01:49.049 **********
2026-04-05 03:22:59.892369 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892375 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892381 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892387 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892393 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892399 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892404 | orchestrator |
2026-04-05 03:22:59.892410 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-05 03:22:59.892416 | orchestrator | Sunday 05 April 2026 03:22:40 +0000 (0:00:02.374) 0:01:51.535 **********
2026-04-05 03:22:59.892422 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892427 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892433 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892438 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892443 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892448 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892453 | orchestrator |
2026-04-05 03:22:59.892459 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-05 03:22:59.892464 | orchestrator | Sunday 05 April 2026 03:22:42 +0000 (0:00:02.374) 0:01:53.910 **********
2026-04-05 03:22:59.892469 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892474 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892479 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892484 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892490 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892496 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892502 | orchestrator |
2026-04-05 03:22:59.892508 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-05 03:22:59.892514 | orchestrator | Sunday 05 April 2026 03:22:45 +0000 (0:00:02.809) 0:01:56.720 **********
2026-04-05 03:22:59.892522 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892527 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892533 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892538 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892543 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892548 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892553 | orchestrator |
2026-04-05 03:22:59.892559 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-05 03:22:59.892565 | orchestrator | Sunday 05 April 2026 03:22:47 +0000 (0:00:02.357) 0:01:59.078 **********
2026-04-05 03:22:59.892570 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892575 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892582 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892588 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892594 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892600 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892606 | orchestrator |
2026-04-05 03:22:59.892612 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-05 03:22:59.892618 | orchestrator | Sunday 05 April 2026 03:22:50 +0000 (0:00:02.350) 0:02:01.479 **********
2026-04-05 03:22:59.892625 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892634 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892638 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892642 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892645 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892649 | orchestrator |
2026-04-05 03:22:59.892653 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-05 03:22:59.892670 | orchestrator | Sunday 05 April 2026 03:22:52 +0000 (0:00:02.350) 0:02:03.830 **********
2026-04-05 03:22:59.892674 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892684 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892688 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892692 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892695 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892699 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892738 | orchestrator |
2026-04-05 03:22:59.892744 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-05 03:22:59.892748 | orchestrator | Sunday 05 April 2026 03:22:55 +0000 (0:00:02.683) 0:02:06.513 **********
2026-04-05 03:22:59.892752 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892757 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892761 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892765 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892768 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892772 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:22:59.892776 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892780 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:22:59.892783 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892787 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:22:59.892791 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-05 03:22:59.892799 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:22:59.892803 | orchestrator |
2026-04-05 03:22:59.892807 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-05 03:22:59.892810 | orchestrator | Sunday 05 April 2026 03:22:57 +0000 (0:00:02.226) 0:02:08.739 **********
2026-04-05 03:22:59.892816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:59.892822 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:22:59.892826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-05 03:22:59.892830 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:22:59.892846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True,
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-05 03:23:05.790881 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:23:05.791018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:23:05.791048 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:23:05.791101 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:23:05.791123 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:23:05.791141 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 03:23:05.791160 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:23:05.791177 | orchestrator | 2026-04-05 03:23:05.791195 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-05 03:23:05.791213 | orchestrator | Sunday 05 April 2026 03:22:59 +0000 (0:00:02.465) 0:02:11.205 ********** 2026-04-05 03:23:05.791232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-04-05 03:23:05.791309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:23:05.791340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-05 03:23:05.791357 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:23:05.791369 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:23:05.791389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 03:23:05.791402 | orchestrator | 2026-04-05 03:23:05.791413 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 03:23:05.791424 | orchestrator | Sunday 05 April 2026 03:23:02 +0000 (0:00:02.721) 0:02:13.926 ********** 2026-04-05 03:23:05.791437 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:23:05.791449 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:23:05.791460 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:23:05.791472 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:23:05.791484 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:23:05.791496 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:23:05.791507 | orchestrator | 2026-04-05 03:23:05.791517 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-05 03:23:05.791526 | orchestrator | Sunday 05 April 2026 03:23:03 +0000 (0:00:00.815) 0:02:14.742 ********** 2026-04-05 03:23:05.791543 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:25:29.794618 | orchestrator | 2026-04-05 03:25:29.794718 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-05 03:25:29.794731 | orchestrator | Sunday 05 April 2026 03:23:05 +0000 (0:00:02.367) 0:02:17.109 ********** 2026-04-05 03:25:29.794740 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:25:29.794750 | orchestrator | 2026-04-05 03:25:29.794759 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-05 03:25:29.794767 | orchestrator | Sunday 05 April 2026 03:23:08 +0000 (0:00:02.383) 0:02:19.492 
********** 2026-04-05 03:25:29.794776 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:25:29.794784 | orchestrator | 2026-04-05 03:25:29.794792 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794801 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:44.129) 0:03:03.621 ********** 2026-04-05 03:25:29.794809 | orchestrator | 2026-04-05 03:25:29.794818 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794826 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.075) 0:03:03.696 ********** 2026-04-05 03:25:29.794834 | orchestrator | 2026-04-05 03:25:29.794842 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794851 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.072) 0:03:03.768 ********** 2026-04-05 03:25:29.794859 | orchestrator | 2026-04-05 03:25:29.794867 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794876 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.074) 0:03:03.843 ********** 2026-04-05 03:25:29.794884 | orchestrator | 2026-04-05 03:25:29.794907 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794916 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.070) 0:03:03.914 ********** 2026-04-05 03:25:29.794924 | orchestrator | 2026-04-05 03:25:29.794932 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-05 03:25:29.794941 | orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.072) 0:03:03.986 ********** 2026-04-05 03:25:29.794949 | orchestrator | 2026-04-05 03:25:29.794957 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-05 03:25:29.794965 | 
orchestrator | Sunday 05 April 2026 03:23:52 +0000 (0:00:00.073) 0:03:04.060 ********** 2026-04-05 03:25:29.794994 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:25:29.795002 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:25:29.795010 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:25:29.795019 | orchestrator | 2026-04-05 03:25:29.795027 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-05 03:25:29.795035 | orchestrator | Sunday 05 April 2026 03:24:22 +0000 (0:00:29.733) 0:03:33.794 ********** 2026-04-05 03:25:29.795043 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:25:29.795052 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:25:29.795060 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:25:29.795068 | orchestrator | 2026-04-05 03:25:29.795076 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:25:29.795087 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 03:25:29.795097 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-05 03:25:29.795106 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-05 03:25:29.795114 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 03:25:29.795122 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 03:25:29.795130 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-05 03:25:29.795139 | orchestrator | 2026-04-05 03:25:29.795146 | orchestrator | 2026-04-05 03:25:29.795154 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 
03:25:29.795161 | orchestrator | Sunday 05 April 2026 03:25:29 +0000 (0:01:06.793) 0:04:40.587 ********** 2026-04-05 03:25:29.795168 | orchestrator | =============================================================================== 2026-04-05 03:25:29.795180 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 66.79s 2026-04-05 03:25:29.795192 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.13s 2026-04-05 03:25:29.795204 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.73s 2026-04-05 03:25:29.795216 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.34s 2026-04-05 03:25:29.795228 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.75s 2026-04-05 03:25:29.795240 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.40s 2026-04-05 03:25:29.795253 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.10s 2026-04-05 03:25:29.795265 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.01s 2026-04-05 03:25:29.795276 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.96s 2026-04-05 03:25:29.795289 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.88s 2026-04-05 03:25:29.795316 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.53s 2026-04-05 03:25:29.795327 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.36s 2026-04-05 03:25:29.795339 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.32s 2026-04-05 03:25:29.795351 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.23s 2026-04-05 03:25:29.795362 | 
orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.22s 2026-04-05 03:25:29.795374 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.11s 2026-04-05 03:25:29.795392 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 2.81s 2026-04-05 03:25:29.795403 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.72s 2026-04-05 03:25:29.795429 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.68s 2026-04-05 03:25:29.795439 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.59s 2026-04-05 03:25:32.234405 | orchestrator | 2026-04-05 03:25:32 | INFO  | Task ac575eb0-6874-4785-8d85-592cd86c3c8f (nova) was prepared for execution. 2026-04-05 03:25:32.234549 | orchestrator | 2026-04-05 03:25:32 | INFO  | It takes a moment until task ac575eb0-6874-4785-8d85-592cd86c3c8f (nova) has been started and output is visible here. 
2026-04-05 03:27:39.893002 | orchestrator | 2026-04-05 03:27:39.893135 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:27:39.893153 | orchestrator | 2026-04-05 03:27:39.893166 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-05 03:27:39.893177 | orchestrator | Sunday 05 April 2026 03:25:36 +0000 (0:00:00.297) 0:00:00.297 ********** 2026-04-05 03:27:39.893189 | orchestrator | changed: [testbed-manager] 2026-04-05 03:27:39.893202 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893244 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:27:39.893255 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:27:39.893266 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:27:39.893276 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:27:39.893287 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:27:39.893298 | orchestrator | 2026-04-05 03:27:39.893309 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:27:39.893320 | orchestrator | Sunday 05 April 2026 03:25:37 +0000 (0:00:00.936) 0:00:01.234 ********** 2026-04-05 03:27:39.893331 | orchestrator | changed: [testbed-manager] 2026-04-05 03:27:39.893343 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893354 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:27:39.893365 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:27:39.893376 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:27:39.893386 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:27:39.893398 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:27:39.893409 | orchestrator | 2026-04-05 03:27:39.893420 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:27:39.893431 | orchestrator | Sunday 05 April 2026 03:25:38 +0000 (0:00:00.904) 0:00:02.138 
********** 2026-04-05 03:27:39.893442 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-05 03:27:39.893454 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-05 03:27:39.893465 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-05 03:27:39.893476 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-05 03:27:39.893487 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-05 03:27:39.893497 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-05 03:27:39.893508 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-05 03:27:39.893519 | orchestrator | 2026-04-05 03:27:39.893530 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-05 03:27:39.893543 | orchestrator | 2026-04-05 03:27:39.893557 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-05 03:27:39.893570 | orchestrator | Sunday 05 April 2026 03:25:39 +0000 (0:00:00.765) 0:00:02.904 ********** 2026-04-05 03:27:39.893583 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:27:39.893596 | orchestrator | 2026-04-05 03:27:39.893609 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-05 03:27:39.893622 | orchestrator | Sunday 05 April 2026 03:25:40 +0000 (0:00:00.807) 0:00:03.711 ********** 2026-04-05 03:27:39.893636 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-05 03:27:39.893673 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-05 03:27:39.893686 | orchestrator | 2026-04-05 03:27:39.893699 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-05 03:27:39.893713 | orchestrator | Sunday 05 April 2026 03:25:45 +0000 (0:00:05.093) 0:00:08.805 
********** 2026-04-05 03:27:39.893725 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:27:39.893739 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:27:39.893752 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893764 | orchestrator | 2026-04-05 03:27:39.893793 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-05 03:27:39.893807 | orchestrator | Sunday 05 April 2026 03:25:49 +0000 (0:00:04.395) 0:00:13.201 ********** 2026-04-05 03:27:39.893830 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893843 | orchestrator | 2026-04-05 03:27:39.893856 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-05 03:27:39.893869 | orchestrator | Sunday 05 April 2026 03:25:50 +0000 (0:00:00.654) 0:00:13.856 ********** 2026-04-05 03:27:39.893881 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893893 | orchestrator | 2026-04-05 03:27:39.893904 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-05 03:27:39.893914 | orchestrator | Sunday 05 April 2026 03:25:51 +0000 (0:00:01.321) 0:00:15.177 ********** 2026-04-05 03:27:39.893925 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.893936 | orchestrator | 2026-04-05 03:27:39.893947 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 03:27:39.893965 | orchestrator | Sunday 05 April 2026 03:25:54 +0000 (0:00:02.601) 0:00:17.779 ********** 2026-04-05 03:27:39.893984 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:27:39.894001 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:27:39.894161 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:27:39.894177 | orchestrator | 2026-04-05 03:27:39.894189 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-05 
03:27:39.894200 | orchestrator | Sunday 05 April 2026 03:25:54 +0000 (0:00:00.311) 0:00:18.090 ********** 2026-04-05 03:27:39.894233 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:27:39.894245 | orchestrator | 2026-04-05 03:27:39.894256 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-05 03:27:39.894267 | orchestrator | Sunday 05 April 2026 03:26:28 +0000 (0:00:33.570) 0:00:51.661 ********** 2026-04-05 03:27:39.894278 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.894288 | orchestrator | 2026-04-05 03:27:39.894299 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-05 03:27:39.894309 | orchestrator | Sunday 05 April 2026 03:26:43 +0000 (0:00:14.842) 0:01:06.503 ********** 2026-04-05 03:27:39.894320 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:27:39.894331 | orchestrator | 2026-04-05 03:27:39.894341 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 03:27:39.894352 | orchestrator | Sunday 05 April 2026 03:26:55 +0000 (0:00:12.915) 0:01:19.419 ********** 2026-04-05 03:27:39.894384 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:27:39.894396 | orchestrator | 2026-04-05 03:27:39.894414 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-05 03:27:39.894426 | orchestrator | Sunday 05 April 2026 03:26:56 +0000 (0:00:00.733) 0:01:20.153 ********** 2026-04-05 03:27:39.894436 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:27:39.894447 | orchestrator | 2026-04-05 03:27:39.894458 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 03:27:39.894469 | orchestrator | Sunday 05 April 2026 03:26:57 +0000 (0:00:00.488) 0:01:20.642 ********** 2026-04-05 03:27:39.894481 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-05 03:27:39.894492 | orchestrator | 2026-04-05 03:27:39.894503 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-05 03:27:39.894526 | orchestrator | Sunday 05 April 2026 03:26:57 +0000 (0:00:00.718) 0:01:21.360 ********** 2026-04-05 03:27:39.894537 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:27:39.894548 | orchestrator | 2026-04-05 03:27:39.894559 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-05 03:27:39.894569 | orchestrator | Sunday 05 April 2026 03:27:19 +0000 (0:00:21.236) 0:01:42.597 ********** 2026-04-05 03:27:39.894580 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:27:39.894591 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:27:39.894602 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:27:39.894613 | orchestrator | 2026-04-05 03:27:39.894624 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-05 03:27:39.894634 | orchestrator | 2026-04-05 03:27:39.894645 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-05 03:27:39.894656 | orchestrator | Sunday 05 April 2026 03:27:19 +0000 (0:00:00.344) 0:01:42.942 ********** 2026-04-05 03:27:39.894667 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:27:39.894678 | orchestrator | 2026-04-05 03:27:39.894688 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-05 03:27:39.894699 | orchestrator | Sunday 05 April 2026 03:27:20 +0000 (0:00:00.858) 0:01:43.800 ********** 2026-04-05 03:27:39.894711 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:27:39.894722 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:27:39.894733 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:27:39.894743 | orchestrator | 
2026-04-05 03:27:39.894754 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-05 03:27:39.894765 | orchestrator | Sunday 05 April 2026 03:27:22 +0000 (0:00:02.301) 0:01:46.101 **********
2026-04-05 03:27:39.894775 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.894786 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.894797 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:27:39.894808 | orchestrator |
2026-04-05 03:27:39.894819 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 03:27:39.894830 | orchestrator | Sunday 05 April 2026 03:27:25 +0000 (0:00:02.373) 0:01:48.475 **********
2026-04-05 03:27:39.894840 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:27:39.894851 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.894862 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.894872 | orchestrator |
2026-04-05 03:27:39.894883 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 03:27:39.894894 | orchestrator | Sunday 05 April 2026 03:27:25 +0000 (0:00:00.586) 0:01:49.062 **********
2026-04-05 03:27:39.894905 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 03:27:39.894915 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.894926 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 03:27:39.894937 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.894948 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-05 03:27:39.894959 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-05 03:27:39.894970 | orchestrator |
2026-04-05 03:27:39.894981 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-05 03:27:39.894992 | orchestrator | Sunday 05 April 2026 03:27:34 +0000 (0:00:08.415) 0:01:57.477 **********
2026-04-05 03:27:39.895002 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:27:39.895013 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.895024 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.895034 | orchestrator |
2026-04-05 03:27:39.895045 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-05 03:27:39.895056 | orchestrator | Sunday 05 April 2026 03:27:34 +0000 (0:00:00.380) 0:01:57.858 **********
2026-04-05 03:27:39.895067 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-05 03:27:39.895078 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:27:39.895089 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 03:27:39.895106 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.895117 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 03:27:39.895128 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.895139 | orchestrator |
2026-04-05 03:27:39.895149 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 03:27:39.895160 | orchestrator | Sunday 05 April 2026 03:27:35 +0000 (0:00:01.172) 0:01:59.030 **********
2026-04-05 03:27:39.895171 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.895182 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.895192 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:27:39.895258 | orchestrator |
2026-04-05 03:27:39.895271 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-05 03:27:39.895282 | orchestrator | Sunday 05 April 2026 03:27:36 +0000 (0:00:00.541) 0:01:59.571 **********
2026-04-05 03:27:39.895293 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.895303 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.895314 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:27:39.895325 | orchestrator |
2026-04-05 03:27:39.895336 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-05 03:27:39.895346 | orchestrator | Sunday 05 April 2026 03:27:37 +0000 (0:00:01.187) 0:02:00.758 **********
2026-04-05 03:27:39.895357 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:27:39.895368 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:27:39.895386 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:29:02.566967 | orchestrator |
2026-04-05 03:29:02.567051 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-05 03:29:02.567062 | orchestrator | Sunday 05 April 2026 03:27:39 +0000 (0:00:02.566) 0:02:03.325 **********
2026-04-05 03:29:02.567069 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567076 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567082 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:29:02.567118 | orchestrator |
2026-04-05 03:29:02.567125 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 03:29:02.567132 | orchestrator | Sunday 05 April 2026 03:28:02 +0000 (0:00:22.909) 0:02:26.235 **********
2026-04-05 03:29:02.567138 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567145 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567151 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:29:02.567157 | orchestrator |
2026-04-05 03:29:02.567163 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 03:29:02.567169 | orchestrator | Sunday 05 April 2026 03:28:16 +0000 (0:00:13.401) 0:02:39.636 **********
2026-04-05 03:29:02.567175 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:29:02.567181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567186 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567192 | orchestrator |
2026-04-05 03:29:02.567198 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-05 03:29:02.567204 | orchestrator | Sunday 05 April 2026 03:28:17 +0000 (0:00:01.166) 0:02:40.802 **********
2026-04-05 03:29:02.567210 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567216 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567222 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:29:02.567228 | orchestrator |
2026-04-05 03:29:02.567234 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-05 03:29:02.567240 | orchestrator | Sunday 05 April 2026 03:28:29 +0000 (0:00:12.247) 0:02:53.050 **********
2026-04-05 03:29:02.567246 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:02.567251 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567257 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567263 | orchestrator |
2026-04-05 03:29:02.567269 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 03:29:02.567274 | orchestrator | Sunday 05 April 2026 03:28:30 +0000 (0:00:01.215) 0:02:54.266 **********
2026-04-05 03:29:02.567301 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:02.567307 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:02.567313 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:02.567319 | orchestrator |
2026-04-05 03:29:02.567324 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-05 03:29:02.567330 | orchestrator |
2026-04-05 03:29:02.567336 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 03:29:02.567342 | orchestrator | Sunday 05 April 2026 03:28:31 +0000 (0:00:00.328) 0:02:54.594 **********
2026-04-05 03:29:02.567348 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:29:02.567354 | orchestrator |
2026-04-05 03:29:02.567360 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-05 03:29:02.567366 | orchestrator | Sunday 05 April 2026 03:28:31 +0000 (0:00:00.812) 0:02:55.406 **********
2026-04-05 03:29:02.567372 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-05 03:29:02.567377 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-05 03:29:02.567383 | orchestrator |
2026-04-05 03:29:02.567389 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-05 03:29:02.567395 | orchestrator | Sunday 05 April 2026 03:28:35 +0000 (0:00:03.596) 0:02:59.003 **********
2026-04-05 03:29:02.567401 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-05 03:29:02.567493 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-05 03:29:02.567511 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-05 03:29:02.567521 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-05 03:29:02.567531 | orchestrator |
2026-04-05 03:29:02.567542 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-05 03:29:02.567552 | orchestrator | Sunday 05 April 2026 03:28:42 +0000 (0:00:06.801) 0:03:05.805 **********
2026-04-05 03:29:02.567563 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 03:29:02.567573 | orchestrator |
2026-04-05 03:29:02.567583 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-05 03:29:02.567595 | orchestrator | Sunday 05 April 2026 03:28:45 +0000 (0:00:03.595) 0:03:09.401 **********
2026-04-05 03:29:02.567602 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 03:29:02.567609 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-05 03:29:02.567616 | orchestrator |
2026-04-05 03:29:02.567624 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-05 03:29:02.567631 | orchestrator | Sunday 05 April 2026 03:28:49 +0000 (0:00:04.004) 0:03:13.405 **********
2026-04-05 03:29:02.567638 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 03:29:02.567645 | orchestrator |
2026-04-05 03:29:02.567653 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-04-05 03:29:02.567660 | orchestrator | Sunday 05 April 2026 03:28:53 +0000 (0:00:03.492) 0:03:16.897 **********
2026-04-05 03:29:02.567668 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-05 03:29:02.567676 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-04-05 03:29:02.567683 | orchestrator |
2026-04-05 03:29:02.567690 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-05 03:29:02.567713 | orchestrator | Sunday 05 April 2026 03:29:01 +0000 (0:00:07.756) 0:03:24.654 **********
2026-04-05 03:29:02.567729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:02.567751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:02.567760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:02.567777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.326915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.327013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.327020 | orchestrator |
2026-04-05 03:29:07.327026 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-05 03:29:07.327031 | orchestrator | Sunday 05 April 2026 03:29:02 +0000 (0:00:01.349) 0:03:26.003 **********
2026-04-05 03:29:07.327035 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:07.327041 | orchestrator |
2026-04-05 03:29:07.327045 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-05 03:29:07.327049 | orchestrator | Sunday 05 April 2026 03:29:02 +0000 (0:00:00.151) 0:03:26.155 **********
2026-04-05 03:29:07.327053 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:07.327057 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:07.327061 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:07.327065 | orchestrator |
2026-04-05 03:29:07.327069 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-05 03:29:07.327073 | orchestrator | Sunday 05 April 2026 03:29:03 +0000 (0:00:00.327) 0:03:26.482 **********
2026-04-05 03:29:07.327077 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 03:29:07.327081 | orchestrator |
2026-04-05 03:29:07.327108 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-05 03:29:07.327112 | orchestrator | Sunday 05 April 2026 03:29:03 +0000 (0:00:00.750) 0:03:27.233 **********
2026-04-05 03:29:07.327116 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:07.327120 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:07.327124 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:07.327128 | orchestrator |
2026-04-05 03:29:07.327131 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 03:29:07.327135 | orchestrator | Sunday 05 April 2026 03:29:04 +0000 (0:00:00.557) 0:03:27.790 **********
2026-04-05 03:29:07.327140 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:29:07.327145 | orchestrator |
2026-04-05 03:29:07.327149 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-05 03:29:07.327153 | orchestrator | Sunday 05 April 2026 03:29:04 +0000 (0:00:00.598) 0:03:28.389 **********
2026-04-05 03:29:07.327159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:07.327204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:07.327210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:07.327214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.327219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.327229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:07.327233 | orchestrator |
2026-04-05 03:29:07.327240 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-05 03:29:09.135593 | orchestrator | Sunday 05 April 2026 03:29:07 +0000 (0:00:02.375) 0:03:30.764 **********
2026-04-05 03:29:09.135792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:09.135833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:09.135855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:09.135880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:09.135920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:09.135948 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:09.135983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:09.136000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:09.136014 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:09.136027 | orchestrator |
2026-04-05 03:29:09.136041 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-05 03:29:09.136054 | orchestrator | Sunday 05 April 2026 03:29:08 +0000 (0:00:00.937) 0:03:31.701 **********
2026-04-05 03:29:09.136068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:09.136118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:09.136133 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:29:09.136163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:11.772907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:11.772983 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:29:11.772993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:11.773030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:29:11.773036 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:29:11.773048 | orchestrator |
2026-04-05 03:29:11.773054 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-04-05 03:29:11.773061 | orchestrator | Sunday 05 April 2026 03:29:09 +0000 (0:00:00.871) 0:03:32.573 **********
2026-04-05 03:29:11.773156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-05 03:29:11.773188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:11.773198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:11.773216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:11.773230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:11.773245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-05 03:29:18.316901 | orchestrator | 2026-04-05 03:29:18.317036 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-05 03:29:18.317065 | orchestrator | Sunday 05 April 2026 03:29:11 +0000 (0:00:02.631) 0:03:35.205 ********** 2026-04-05 03:29:18.317183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:18.317227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:18.317258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:18.317294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:18.317308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:18.317328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:18.317375 | orchestrator | 2026-04-05 03:29:18.317390 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-05 03:29:18.317403 | orchestrator | Sunday 05 April 2026 03:29:17 +0000 (0:00:05.943) 0:03:41.148 ********** 2026-04-05 03:29:18.317423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 03:29:18.317438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:29:18.317450 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:29:18.317477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 03:29:22.943271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:29:22.943378 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:29:22.943396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-05 03:29:22.943428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:29:22.943439 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:29:22.943449 | orchestrator | 2026-04-05 03:29:22.943460 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-05 03:29:22.943471 | orchestrator | Sunday 05 April 2026 03:29:18 +0000 (0:00:00.610) 0:03:41.758 ********** 2026-04-05 03:29:22.943481 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:29:22.943491 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:29:22.943501 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:29:22.943510 | orchestrator | 2026-04-05 03:29:22.943520 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-05 03:29:22.943530 | orchestrator | Sunday 05 April 2026 03:29:19 +0000 (0:00:01.600) 0:03:43.359 ********** 2026-04-05 03:29:22.943539 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:29:22.943549 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:29:22.943558 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:29:22.943568 | orchestrator | 2026-04-05 03:29:22.943577 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-05 03:29:22.943596 | orchestrator | Sunday 05 April 2026 03:29:20 +0000 (0:00:00.378) 0:03:43.737 ********** 2026-04-05 03:29:22.943636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:22.943687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:22.943720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-05 03:29:22.943741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:22.943773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:29:22.943804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:30:09.866371 | orchestrator | 2026-04-05 03:30:09.866483 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 03:30:09.866500 | orchestrator | Sunday 05 April 2026 03:29:22 +0000 (0:00:02.188) 0:03:45.926 ********** 2026-04-05 03:30:09.866511 | orchestrator | 2026-04-05 03:30:09.866523 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 03:30:09.866534 | orchestrator | Sunday 05 April 2026 03:29:22 +0000 (0:00:00.156) 0:03:46.082 ********** 2026-04-05 
03:30:09.866545 | orchestrator | 2026-04-05 03:30:09.866556 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 03:30:09.866567 | orchestrator | Sunday 05 April 2026 03:29:22 +0000 (0:00:00.144) 0:03:46.226 ********** 2026-04-05 03:30:09.866578 | orchestrator | 2026-04-05 03:30:09.866589 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-05 03:30:09.866600 | orchestrator | Sunday 05 April 2026 03:29:22 +0000 (0:00:00.148) 0:03:46.375 ********** 2026-04-05 03:30:09.866611 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:30:09.866623 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:30:09.866634 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:30:09.866645 | orchestrator | 2026-04-05 03:30:09.866656 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-05 03:30:09.866667 | orchestrator | Sunday 05 April 2026 03:29:46 +0000 (0:00:24.072) 0:04:10.447 ********** 2026-04-05 03:30:09.866678 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:30:09.866689 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:30:09.866700 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:30:09.866711 | orchestrator | 2026-04-05 03:30:09.866722 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-05 03:30:09.866733 | orchestrator | 2026-04-05 03:30:09.866744 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 03:30:09.866755 | orchestrator | Sunday 05 April 2026 03:29:57 +0000 (0:00:10.530) 0:04:20.978 ********** 2026-04-05 03:30:09.866767 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:30:09.866779 | orchestrator | 2026-04-05 03:30:09.866790 | 
orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 03:30:09.866817 | orchestrator | Sunday 05 April 2026 03:29:58 +0000 (0:00:01.306) 0:04:22.284 **********
2026-04-05 03:30:09.866828 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:09.866839 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:09.866850 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:09.866885 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:09.866898 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:09.866911 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:09.866924 | orchestrator |
2026-04-05 03:30:09.866937 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-05 03:30:09.866949 | orchestrator | Sunday 05 April 2026 03:29:59 +0000 (0:00:00.808) 0:04:23.093 **********
2026-04-05 03:30:09.866963 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:09.866976 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:09.866988 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:09.867002 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:30:09.867050 | orchestrator |
2026-04-05 03:30:09.867069 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 03:30:09.867089 | orchestrator | Sunday 05 April 2026 03:30:00 +0000 (0:00:00.987) 0:04:24.080 **********
2026-04-05 03:30:09.867111 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-05 03:30:09.867130 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-05 03:30:09.867150 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-05 03:30:09.867164 | orchestrator |
2026-04-05 03:30:09.867177 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 03:30:09.867190 | orchestrator | Sunday 05 April 2026 03:30:01 +0000 (0:00:00.986) 0:04:25.067 **********
2026-04-05 03:30:09.867203 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-05 03:30:09.867216 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-05 03:30:09.867230 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-05 03:30:09.867243 | orchestrator |
2026-04-05 03:30:09.867255 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 03:30:09.867268 | orchestrator | Sunday 05 April 2026 03:30:02 +0000 (0:00:01.254) 0:04:26.322 **********
2026-04-05 03:30:09.867278 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-05 03:30:09.867289 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:09.867300 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-05 03:30:09.867311 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:09.867321 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-05 03:30:09.867332 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:09.867343 | orchestrator |
2026-04-05 03:30:09.867354 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-05 03:30:09.867372 | orchestrator | Sunday 05 April 2026 03:30:03 +0000 (0:00:00.563) 0:04:26.885 **********
2026-04-05 03:30:09.867400 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867420 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867438 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867455 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867473 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:09.867490 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867507 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867524 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:09.867566 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867586 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 03:30:09.867606 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867623 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:09.867641 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867670 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867681 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 03:30:09.867692 | orchestrator |
2026-04-05 03:30:09.867703 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-05 03:30:09.867714 | orchestrator | Sunday 05 April 2026 03:30:04 +0000 (0:00:01.211) 0:04:28.097 **********
2026-04-05 03:30:09.867724 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:09.867744 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:09.867773 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:09.867793 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:30:09.867811 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:30:09.867850 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:30:09.867883 | orchestrator |
2026-04-05 03:30:09.867902 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-05 03:30:09.867920 | orchestrator |
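For context, the br_netfilter module tasks and the bridge-nf-call sysctl task logged above correspond to a standard Ansible pattern along the following lines. This is a hedged sketch only, not the actual OSISM/kolla-ansible role source; task layout and parameters are illustrative:

```yaml
# Sketch: load a kernel module, persist it across reboots, enable sysctls.
# The real module-load / nova-cell tasks may differ in names and options.
- name: Load modules
  community.general.modprobe:
    name: br_netfilter
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "br_netfilter\n"
    dest: /etc/modules-load.d/br_netfilter.conf
    mode: "0644"

- name: Enable bridge-nf-call sysctl variables
  ansible.posix.sysctl:
    name: "{{ item }}"
    value: "1"
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables
```

The tasks are skipped on the control nodes (testbed-node-0/1/2) and applied only on the compute nodes (testbed-node-3/4/5), which matches the skipping/changed pattern in the log.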
Sunday 05 April 2026 03:30:05 +0000 (0:00:01.872) 0:04:29.349 **********
2026-04-05 03:30:09.867938 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:09.867955 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:09.867971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:09.867988 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:30:09.868085 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:30:09.868109 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:30:09.868127 | orchestrator |
2026-04-05 03:30:09.868146 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 03:30:09.868163 | orchestrator | Sunday 05 April 2026 03:30:07 +0000 (0:00:01.872) 0:04:31.221 **********
2026-04-05 03:30:09.868198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:09.868226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:09.868267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:11.774445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:11.774557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:11.774594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:11.774611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:11.774623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:11.774635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:11.774770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:11.774785 | orchestrator |
2026-04-05 03:30:11.774795 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 03:30:11.774805 | orchestrator | Sunday 05
April 2026 03:30:10 +0000 (0:00:02.579) 0:04:33.801 **********
2026-04-05 03:30:11.774815 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:30:11.774825 | orchestrator |
2026-04-05 03:30:11.774832 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-05 03:30:11.774847 | orchestrator | Sunday 05 April 2026 03:30:11 +0000 (0:00:01.412) 0:04:35.214 **********
2026-04-05 03:30:15.381735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:15.381865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:15.381885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:15.381901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:15.381940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:15.381974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:15.381988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:15.382109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:15.382128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:15.382141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:15.382164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:15.382177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:15.382199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:16.936080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:16.936200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:16.936212 | orchestrator |
2026-04-05 03:30:16.936221 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-05 03:30:16.936230 | orchestrator | Sunday 05 April 2026 03:30:15 +0000 (0:00:03.765) 0:04:38.979 **********
2026-04-05 03:30:16.936239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group':
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:16.936265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:16.936274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:16.936281 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:16.936306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:16.936314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:16.936321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:16.936333 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:16.936340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:16.936348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:16.936361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:18.877437 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:18.877553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:18.877575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:18.877612 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:18.877626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:18.877637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:18.877647 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:18.877658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:18.877670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:18.877681 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:18.877693 | orchestrator |
2026-04-05 03:30:18.877704 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-05 03:30:18.877712 | orchestrator | Sunday 05 April 2026 03:30:17 +0000 (0:00:01.670) 0:04:40.649 **********
2026-04-05 03:30:18.877760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 03:30:18.877777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 03:30:18.877786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 03:30:18.877793 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:30:18.877800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 03:30:18.877807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 03:30:18.877824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 03:30:26.630260 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:30:26.630370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 03:30:26.630417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 03:30:26.630441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 03:30:26.630460 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:30:26.630479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 03:30:26.630500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 03:30:26.630521 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:30:26.630585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 03:30:26.630623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 03:30:26.630644 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:30:26.630666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:26.630688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:26.630710 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:26.630731 | orchestrator |
2026-04-05 03:30:26.630754 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 03:30:26.630777 | orchestrator | Sunday 05 April 2026 03:30:19 +0000 (0:00:02.439) 0:04:43.089 **********
2026-04-05 03:30:26.630799 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:26.630822 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:26.630843 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:26.630866 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:30:26.630888 | orchestrator |
2026-04-05 03:30:26.630910 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-04-05 03:30:26.630932 | orchestrator | Sunday 05 April 2026 03:30:20 +0000 (0:00:00.981) 0:04:44.070 **********
2026-04-05 03:30:26.630953 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 03:30:26.630975 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 03:30:26.631023 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 03:30:26.631046 | orchestrator |
2026-04-05 03:30:26.631067 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-04-05 03:30:26.631087 | orchestrator | Sunday 05 April 2026 03:30:21 +0000 (0:00:01.211) 0:04:45.281 **********
2026-04-05 03:30:26.631107 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 03:30:26.631126 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 03:30:26.631146 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 03:30:26.631166 | orchestrator |
2026-04-05 03:30:26.631186 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-04-05 03:30:26.631206 | orchestrator | Sunday 05 April 2026 03:30:22 +0000 (0:00:00.982) 0:04:46.264 **********
2026-04-05 03:30:26.631238 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:30:26.631259 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:30:26.631279 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:30:26.631298 | orchestrator |
2026-04-05 03:30:26.631318 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-04-05 03:30:26.631338 | orchestrator | Sunday 05 April 2026 03:30:23 +0000 (0:00:00.543) 0:04:46.807 **********
2026-04-05 03:30:26.631357 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:30:26.631377 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:30:26.631397 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:30:26.631417 | orchestrator |
2026-04-05 03:30:26.631437 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-04-05 03:30:26.631456 | orchestrator | Sunday 05 April 2026 03:30:23 +0000 (0:00:00.527) 0:04:47.335 **********
2026-04-05 03:30:26.631476 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 03:30:26.631496 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 03:30:26.631516 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 03:30:26.631536 | orchestrator |
2026-04-05 03:30:26.631556 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-04-05 03:30:26.631576 | orchestrator | Sunday 05 April 2026 03:30:25 +0000 (0:00:01.466) 0:04:48.801 **********
2026-04-05 03:30:26.631614 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 03:30:46.375531 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 03:30:46.375626 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 03:30:46.375637 | orchestrator |
2026-04-05 03:30:46.375646 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-04-05 03:30:46.375656 | orchestrator | Sunday 05 April 2026 03:30:26 +0000 (0:00:01.269) 0:04:50.070 **********
2026-04-05 03:30:46.375664 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 03:30:46.375672 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 03:30:46.375680 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 03:30:46.375713 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-04-05 03:30:46.375721 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-04-05 03:30:46.375729 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-04-05 03:30:46.375737 | orchestrator |
2026-04-05 03:30:46.375746 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-04-05 03:30:46.375754 | orchestrator | Sunday 05 April 2026 03:30:30 +0000 (0:00:04.120) 0:04:54.191 **********
2026-04-05 03:30:46.375763 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:46.375772 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:46.375780 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:46.375788 | orchestrator |
2026-04-05 03:30:46.375796 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-04-05 03:30:46.375804 | orchestrator | Sunday 05 April 2026 03:30:31 +0000 (0:00:00.368) 0:04:54.560 **********
2026-04-05 03:30:46.375812 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:46.375820 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:46.375828 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:46.375836 | orchestrator |
2026-04-05 03:30:46.375844 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-04-05 03:30:46.375852 | orchestrator | Sunday 05 April 2026 03:30:31 +0000 (0:00:00.580) 0:04:55.141 **********
2026-04-05 03:30:46.375860 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:30:46.375868 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:30:46.375876 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:30:46.375884 | orchestrator |
2026-04-05 03:30:46.375892 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-04-05 03:30:46.375900 | orchestrator | Sunday 05 April 2026 03:30:33 +0000 (0:00:01.358) 0:04:56.499 **********
2026-04-05 03:30:46.375909 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-05 03:30:46.375939 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-05 03:30:46.375947 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-04-05 03:30:46.375955 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-05 03:30:46.376004 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-05 03:30:46.376013 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-04-05 03:30:46.376021 | orchestrator |
2026-04-05 03:30:46.376029 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-04-05 03:30:46.376037 | orchestrator | Sunday 05 April 2026 03:30:36 +0000 (0:00:03.591) 0:05:00.091 **********
2026-04-05 03:30:46.376045 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 03:30:46.376053 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 03:30:46.376061 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 03:30:46.376069 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-05 03:30:46.376077 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:30:46.376086 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-05 03:30:46.376096 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:30:46.376105 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-05 03:30:46.376114 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:30:46.376124 | orchestrator |
2026-04-05 03:30:46.376133 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-04-05 03:30:46.376142 | orchestrator | Sunday 05 April 2026 03:30:40 +0000 (0:00:03.631) 0:05:03.723 **********
2026-04-05 03:30:46.376151 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:46.376160 | orchestrator |
2026-04-05 03:30:46.376170 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-04-05 03:30:46.376179 | orchestrator | Sunday 05 April 2026 03:30:40 +0000 (0:00:00.145) 0:05:03.868 **********
2026-04-05 03:30:46.376188 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:46.376198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:46.376206 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:46.376215 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:46.376225 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:46.376234 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:46.376243 | orchestrator |
2026-04-05 03:30:46.376252 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-04-05 03:30:46.376261 | orchestrator | Sunday 05 April 2026 03:30:41 +0000 (0:00:00.911) 0:05:04.780 **********
2026-04-05 03:30:46.376271 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 03:30:46.376280 | orchestrator |
2026-04-05 03:30:46.376289 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-04-05 03:30:46.376299 | orchestrator | Sunday 05 April 2026 03:30:42 +0000 (0:00:00.720) 0:05:05.501 **********
2026-04-05 03:30:46.376322 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:30:46.376347 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:30:46.376356 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:30:46.376365 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:30:46.376374 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:30:46.376383 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:30:46.376392 | orchestrator |
2026-04-05 03:30:46.376402 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-05 03:30:46.376411 | orchestrator | Sunday 05 April 2026 03:30:42 +0000 (0:00:00.854) 0:05:06.356 ********** 2026-04-05 03:30:46.376429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:30:46.376443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:30:46.376453 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:30:46.376465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 03:30:46.376486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:51.395058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:51.395133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:51.395141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:51.395146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:51.395150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:51.395217 | orchestrator |
2026-04-05 03:30:51.395222 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-05 03:30:51.395227 | orchestrator | Sunday 05 April 2026 03:30:46 +0000 (0:00:03.765) 0:05:10.121 **********
2026-04-05 03:30:51.395232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:51.395239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:51.395251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:53.800045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:53.800169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:30:53.800185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:30:53.800197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:53.800306 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:53.800342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:30:53.800353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:30:53.800397 | orchestrator |
2026-04-05 03:30:53.800411 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-05 03:30:53.800438 | orchestrator | Sunday 05 April 2026 03:30:53 +0000 (0:00:07.111) 0:05:17.232 **********
2026-04-05 03:31:17.186986 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:31:17.187097 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:31:17.187112 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:31:17.187123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.187132 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.187142 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.187153 | orchestrator |
2026-04-05 03:31:17.187164 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-05 03:31:17.187175 | orchestrator | Sunday 05 April 2026 03:30:55 +0000 (0:00:01.481) 0:05:18.714 **********
2026-04-05 03:31:17.187185 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187195 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187205 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187221 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187237 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187253 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-05 03:31:17.187270 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187289 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.187306 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187324 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.187335 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187344 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.187354 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187364 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187398 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-05 03:31:17.187408 | orchestrator |
2026-04-05 03:31:17.187419 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-05 03:31:17.187429 | orchestrator | Sunday 05 April 2026 03:30:59 +0000 (0:00:03.886) 0:05:22.600 **********
2026-04-05 03:31:17.187439 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:31:17.187449 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:31:17.187458 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:31:17.187468 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.187479 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.187490 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.187501 | orchestrator |
2026-04-05 03:31:17.187512 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-05 03:31:17.187523 | orchestrator | Sunday 05 April 2026 03:30:59 +0000 (0:00:00.696) 0:05:23.297 **********
2026-04-05 03:31:17.187534 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187547 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187569 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187581 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-05 03:31:17.187617 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187629 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187641 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187663 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.187675 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187716 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.187727 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187739 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.187750 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187761 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187790 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187801 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187812 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187824 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-05 03:31:17.187835 | orchestrator |
2026-04-05 03:31:17.187846 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-05 03:31:17.187855 | orchestrator | Sunday 05 April 2026 03:31:05 +0000 (0:00:05.741) 0:05:29.038 **********
2026-04-05 03:31:17.187873 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187892 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187902 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187911 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187921 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-05 03:31:17.187955 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.187966 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.187975 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.187985 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.187994 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.188004 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.188026 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188045 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.188055 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188065 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.188075 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.188085 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188095 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.188104 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.188114 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-05 03:31:17.188123 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.188133 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.188143 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-05 03:31:17.188152 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188161 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188171 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-05 03:31:17.188180 | orchestrator |
2026-04-05 03:31:17.188190 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-05 03:31:17.188205 | orchestrator | Sunday 05 April 2026 03:31:13 +0000 (0:00:07.577) 0:05:36.616 **********
2026-04-05 03:31:17.188215 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:31:17.188224 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:31:17.188234 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:31:17.188244 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.188254 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.188270 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.188286 | orchestrator |
2026-04-05 03:31:17.188303 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-05 03:31:17.188321 | orchestrator | Sunday 05 April 2026 03:31:14 +0000 (0:00:00.851) 0:05:37.467 **********
2026-04-05 03:31:17.188338 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:31:17.188367 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:31:17.188383 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:31:17.188397 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.188407 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.188417 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.188426 | orchestrator |
2026-04-05 03:31:17.188436 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-05 03:31:17.188445 | orchestrator | Sunday 05 April 2026 03:31:14 +0000 (0:00:00.639) 0:05:38.107 **********
2026-04-05 03:31:17.188455 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:17.188465 | orchestrator | changed: [testbed-node-3]
2026-04-05 03:31:17.188474 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:17.188484 | orchestrator | changed: [testbed-node-4]
2026-04-05 03:31:17.188493 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:31:17.188503 | orchestrator | changed: [testbed-node-5]
2026-04-05 03:31:17.188513 | orchestrator |
2026-04-05 03:31:17.188530 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-05 03:31:18.398804 | orchestrator | Sunday 05 April 2026 03:31:17 +0000 (0:00:02.508) 0:05:40.616 **********
2026-04-05 03:31:18.398918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:31:18.399009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:31:18.399032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:31:18.399044 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:31:18.399072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:31:18.399105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:31:18.399134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:31:18.399145 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:31:18.399155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 03:31:18.399169 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 03:31:18.399186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 03:31:18.399212 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:31:18.399236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:31:18.399264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:31:22.141489 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:31:22.141566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:31:22.141576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 03:31:22.141582 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:31:22.141587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 03:31:22.141592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor',
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 03:31:22.141617 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:31:22.141622 | orchestrator | 2026-04-05 03:31:22.141628 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-05 03:31:22.141634 | orchestrator | Sunday 05 April 2026 03:31:18 +0000 (0:00:01.450) 0:05:42.066 ********** 2026-04-05 03:31:22.141640 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-05 03:31:22.141645 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141650 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:31:22.141665 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-05 03:31:22.141670 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141675 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:31:22.141680 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-05 03:31:22.141685 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141690 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:31:22.141694 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-05 03:31:22.141699 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141704 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:31:22.141709 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-04-05 03:31:22.141714 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141718 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:31:22.141723 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-05 03:31:22.141728 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-05 03:31:22.141733 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:31:22.141737 | orchestrator | 2026-04-05 03:31:22.141742 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-05 03:31:22.141747 | orchestrator | Sunday 05 April 2026 03:31:19 +0000 (0:00:01.026) 0:05:43.093 ********** 2026-04-05 03:31:22.141765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:31:22.141772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:31:22.141782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 03:31:22.141789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 03:31:22.141795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 03:31:22.141805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433497 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 03:32:20.433535 | orchestrator | 2026-04-05 03:32:20.433540 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 03:32:20.433546 | orchestrator | Sunday 05 April 2026 03:31:22 +0000 (0:00:02.856) 0:05:45.949 ********** 2026-04-05 
03:32:20.433550 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:32:20.433556 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:32:20.433560 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:32:20.433564 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:32:20.433567 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:32:20.433571 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:32:20.433575 | orchestrator | 2026-04-05 03:32:20.433579 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 03:32:20.433582 | orchestrator | Sunday 05 April 2026 03:31:23 +0000 (0:00:00.921) 0:05:46.870 ********** 2026-04-05 03:32:20.433586 | orchestrator | 2026-04-05 03:32:20.433590 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 03:32:20.433594 | orchestrator | Sunday 05 April 2026 03:31:23 +0000 (0:00:00.163) 0:05:47.033 ********** 2026-04-05 03:32:20.433597 | orchestrator | 2026-04-05 03:32:20.433601 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 03:32:20.433609 | orchestrator | Sunday 05 April 2026 03:31:23 +0000 (0:00:00.149) 0:05:47.183 ********** 2026-04-05 03:32:20.433612 | orchestrator | 2026-04-05 03:32:20.433616 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 03:32:20.433620 | orchestrator | Sunday 05 April 2026 03:31:23 +0000 (0:00:00.156) 0:05:47.340 ********** 2026-04-05 03:32:20.433624 | orchestrator | 2026-04-05 03:32:20.433627 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 03:32:20.433631 | orchestrator | Sunday 05 April 2026 03:31:24 +0000 (0:00:00.145) 0:05:47.485 ********** 2026-04-05 03:32:20.433635 | orchestrator | 2026-04-05 03:32:20.433638 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-04-05 03:32:20.433642 | orchestrator | Sunday 05 April 2026 03:31:24 +0000 (0:00:00.342) 0:05:47.828 ********** 2026-04-05 03:32:20.433646 | orchestrator | 2026-04-05 03:32:20.433650 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-05 03:32:20.433653 | orchestrator | Sunday 05 April 2026 03:31:24 +0000 (0:00:00.153) 0:05:47.981 ********** 2026-04-05 03:32:20.433657 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:32:20.433661 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:32:20.433665 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:32:20.433669 | orchestrator | 2026-04-05 03:32:20.433672 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-05 03:32:20.433676 | orchestrator | Sunday 05 April 2026 03:31:36 +0000 (0:00:12.060) 0:06:00.042 ********** 2026-04-05 03:32:20.433680 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:32:20.433683 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:32:20.433687 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:32:20.433691 | orchestrator | 2026-04-05 03:32:20.433694 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-05 03:32:20.433701 | orchestrator | Sunday 05 April 2026 03:31:55 +0000 (0:00:19.357) 0:06:19.399 ********** 2026-04-05 03:32:20.433705 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:32:20.433709 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:32:20.433713 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:32:20.433717 | orchestrator | 2026-04-05 03:32:20.433723 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-05 03:34:48.441765 | orchestrator | Sunday 05 April 2026 03:32:20 +0000 (0:00:24.467) 0:06:43.866 ********** 2026-04-05 03:34:48.441900 | orchestrator | changed: 
[testbed-node-5] 2026-04-05 03:34:48.441919 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:34:48.441931 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:34:48.442042 | orchestrator | 2026-04-05 03:34:48.442058 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-05 03:34:48.442071 | orchestrator | Sunday 05 April 2026 03:33:04 +0000 (0:00:43.878) 0:07:27.744 ********** 2026-04-05 03:34:48.442082 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:34:48.442094 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:34:48.442105 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:34:48.442116 | orchestrator | 2026-04-05 03:34:48.442126 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-05 03:34:48.442138 | orchestrator | Sunday 05 April 2026 03:33:05 +0000 (0:00:00.819) 0:07:28.564 ********** 2026-04-05 03:34:48.442149 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:34:48.442160 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:34:48.442177 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:34:48.442196 | orchestrator | 2026-04-05 03:34:48.442221 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-05 03:34:48.442249 | orchestrator | Sunday 05 April 2026 03:33:05 +0000 (0:00:00.810) 0:07:29.374 ********** 2026-04-05 03:34:48.442267 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:34:48.442287 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:34:48.442305 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:34:48.442324 | orchestrator | 2026-04-05 03:34:48.442343 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-05 03:34:48.442362 | orchestrator | Sunday 05 April 2026 03:33:35 +0000 (0:00:30.041) 0:07:59.415 ********** 2026-04-05 03:34:48.442378 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 03:34:48.442397 | orchestrator | 2026-04-05 03:34:48.442416 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-05 03:34:48.442436 | orchestrator | Sunday 05 April 2026 03:33:36 +0000 (0:00:00.139) 0:07:59.555 ********** 2026-04-05 03:34:48.442456 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:34:48.442476 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:34:48.442497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:34:48.442516 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:34:48.442530 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:34:48.442545 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-05 03:34:48.442560 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:34:48.442574 | orchestrator | 2026-04-05 03:34:48.442587 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-05 03:34:48.442600 | orchestrator | Sunday 05 April 2026 03:33:58 +0000 (0:00:22.120) 0:08:21.676 ********** 2026-04-05 03:34:48.442619 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:34:48.442638 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:34:48.442655 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:34:48.442673 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:34:48.442692 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:34:48.442712 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:34:48.442754 | orchestrator | 2026-04-05 03:34:48.442769 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-05 03:34:48.442825 | orchestrator | Sunday 05 April 2026 03:34:08 +0000 (0:00:10.042) 0:08:31.719 ********** 2026-04-05 03:34:48.442849 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 03:34:48.442867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:34:48.442884 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:34:48.442902 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:34:48.442921 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:34:48.442941 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-05 03:34:48.442960 | orchestrator | 2026-04-05 03:34:48.442997 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-05 03:34:48.443017 | orchestrator | Sunday 05 April 2026 03:34:13 +0000 (0:00:04.907) 0:08:36.626 ********** 2026-04-05 03:34:48.443036 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:34:48.443055 | orchestrator | 2026-04-05 03:34:48.443074 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 03:34:48.443114 | orchestrator | Sunday 05 April 2026 03:34:27 +0000 (0:00:14.035) 0:08:50.662 ********** 2026-04-05 03:34:48.443147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:34:48.443166 | orchestrator | 2026-04-05 03:34:48.443185 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-05 03:34:48.443204 | orchestrator | Sunday 05 April 2026 03:34:28 +0000 (0:00:01.645) 0:08:52.308 ********** 2026-04-05 03:34:48.443222 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:34:48.443242 | orchestrator | 2026-04-05 03:34:48.443261 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-05 03:34:48.443280 | orchestrator | Sunday 05 April 2026 03:34:30 +0000 (0:00:01.841) 0:08:54.150 ********** 2026-04-05 03:34:48.443298 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 03:34:48.443317 | orchestrator | 2026-04-05 03:34:48.443336 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-05 03:34:48.443354 | orchestrator | Sunday 05 April 2026 03:34:42 +0000 (0:00:11.944) 0:09:06.094 ********** 2026-04-05 03:34:48.443373 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:34:48.443392 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:34:48.443411 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:34:48.443430 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:34:48.443449 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:34:48.443467 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:34:48.443484 | orchestrator | 2026-04-05 03:34:48.443503 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-05 03:34:48.443522 | orchestrator | 2026-04-05 03:34:48.443541 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-05 03:34:48.443586 | orchestrator | Sunday 05 April 2026 03:34:44 +0000 (0:00:01.922) 0:09:08.017 ********** 2026-04-05 03:34:48.443605 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:34:48.443622 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:34:48.443638 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:34:48.443655 | orchestrator | 2026-04-05 03:34:48.443673 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-05 03:34:48.443691 | orchestrator | 2026-04-05 03:34:48.443709 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-05 03:34:48.443754 | orchestrator | Sunday 05 April 2026 03:34:45 +0000 (0:00:01.009) 0:09:09.027 ********** 2026-04-05 03:34:48.443771 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:34:48.443787 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:34:48.443807 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:34:48.443825 | orchestrator | 2026-04-05 
03:34:48.443843 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-05 03:34:48.443863 | orchestrator |
2026-04-05 03:34:48.443881 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-05 03:34:48.443899 | orchestrator | Sunday 05 April 2026 03:34:46 +0000 (0:00:00.786) 0:09:09.813 **********
2026-04-05 03:34:48.443934 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-05 03:34:48.443953 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-05 03:34:48.443973 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-05 03:34:48.443989 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-05 03:34:48.444000 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-05 03:34:48.444011 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444022 | orchestrator | skipping: [testbed-node-3]
2026-04-05 03:34:48.444033 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-05 03:34:48.444044 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-05 03:34:48.444054 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-05 03:34:48.444065 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-05 03:34:48.444076 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-05 03:34:48.444086 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444097 | orchestrator | skipping: [testbed-node-4]
2026-04-05 03:34:48.444108 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-05 03:34:48.444119 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-05 03:34:48.444129 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-05 03:34:48.444140 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-05 03:34:48.444150 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-05 03:34:48.444161 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444172 | orchestrator | skipping: [testbed-node-5]
2026-04-05 03:34:48.444183 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-05 03:34:48.444193 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-05 03:34:48.444204 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-05 03:34:48.444214 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-05 03:34:48.444225 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-05 03:34:48.444236 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444246 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:34:48.444257 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-05 03:34:48.444268 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-05 03:34:48.444285 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-05 03:34:48.444297 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-05 03:34:48.444307 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-05 03:34:48.444318 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444328 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:34:48.444339 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-05 03:34:48.444350 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-05 03:34:48.444361 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-05 03:34:48.444372 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-05 03:34:48.444382 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-05 03:34:48.444393 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-05 03:34:48.444404 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:34:48.444414 | orchestrator |
2026-04-05 03:34:48.444425 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-05 03:34:48.444440 | orchestrator |
2026-04-05 03:34:48.444459 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-05 03:34:48.444486 | orchestrator | Sunday 05 April 2026 03:34:47 +0000 (0:00:01.443) 0:09:11.256 **********
2026-04-05 03:34:48.444503 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-05 03:34:48.444519 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-05 03:34:48.444537 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:34:48.444553 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-05 03:34:48.444570 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-05 03:34:48.444587 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:34:48.444604 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-05 03:34:48.444622 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-05 03:34:48.444640 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:34:48.444657 | orchestrator |
2026-04-05 03:34:48.444693 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-05 03:34:50.344930 | orchestrator |
2026-04-05 03:34:50.345092 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-05 03:34:50.345124 | orchestrator | Sunday 05 April 2026 03:34:48 +0000 (0:00:00.617) 0:09:11.873 **********
2026-04-05 03:34:50.345145 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:34:50.345167 | orchestrator |
2026-04-05 03:34:50.345186 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-05 03:34:50.345206 | orchestrator |
2026-04-05 03:34:50.345226 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-05 03:34:50.345244 | orchestrator | Sunday 05 April 2026 03:34:49 +0000 (0:00:00.931) 0:09:12.805 **********
2026-04-05 03:34:50.345256 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:34:50.345267 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:34:50.345278 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:34:50.345289 | orchestrator |
2026-04-05 03:34:50.345300 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:34:50.345311 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:34:50.345326 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-05 03:34:50.345337 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-05 03:34:50.345348 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-05 03:34:50.345361 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-05 03:34:50.345377 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-05 03:34:50.345396 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-05 03:34:50.345414 | orchestrator |
2026-04-05 03:34:50.345432 | orchestrator |
2026-04-05 03:34:50.345450 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:34:50.345468 | orchestrator | Sunday 05 April 2026 03:34:49 +0000 (0:00:00.489) 0:09:13.294 **********
2026-04-05 03:34:50.345485 | orchestrator | ===============================================================================
2026-04-05 03:34:50.345503 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.88s
2026-04-05 03:34:50.345519 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.57s
2026-04-05 03:34:50.345538 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.04s
2026-04-05 03:34:50.345593 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.47s
2026-04-05 03:34:50.345614 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.07s
2026-04-05 03:34:50.345632 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.91s
2026-04-05 03:34:50.345650 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.12s
2026-04-05 03:34:50.345687 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.24s
2026-04-05 03:34:50.345707 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.36s
2026-04-05 03:34:50.345807 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.84s
2026-04-05 03:34:50.345826 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.04s
2026-04-05 03:34:50.345843 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.40s
2026-04-05 03:34:50.345858 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.92s
2026-04-05 03:34:50.345874 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.25s
2026-04-05 03:34:50.345891 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.06s
2026-04-05 03:34:50.345908 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.94s
2026-04-05 03:34:50.345926 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.53s
2026-04-05 03:34:50.345944 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.04s
2026-04-05 03:34:50.345960 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.42s
2026-04-05 03:34:50.345977 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.76s
2026-04-05 03:34:52.912574 | orchestrator | 2026-04-05 03:34:52 | INFO  | Task 69b4baff-a555-4e55-9f38-c1389dde9f4f (horizon) was prepared for execution.
2026-04-05 03:34:52.912680 | orchestrator | 2026-04-05 03:34:52 | INFO  | It takes a moment until task 69b4baff-a555-4e55-9f38-c1389dde9f4f (horizon) has been started and output is visible here.
2026-04-05 03:35:00.784773 | orchestrator |
2026-04-05 03:35:00.784897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:35:00.784917 | orchestrator |
2026-04-05 03:35:00.784932 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:35:00.784944 | orchestrator | Sunday 05 April 2026 03:34:57 +0000 (0:00:00.303) 0:00:00.303 **********
2026-04-05 03:35:00.784958 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:00.784973 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:00.784988 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:00.784996 | orchestrator |
2026-04-05 03:35:00.785005 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:35:00.785013 | orchestrator | Sunday 05 April 2026 03:34:57 +0000 (0:00:00.330) 0:00:00.633 **********
2026-04-05 03:35:00.785021 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-05 03:35:00.785049 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-05 03:35:00.785080 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-05 03:35:00.785094 | orchestrator |
2026-04-05 03:35:00.785109 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-05 03:35:00.785122 | orchestrator |
2026-04-05 03:35:00.785135 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 03:35:00.785148 | orchestrator | Sunday 05 April 2026 03:34:58 +0000 (0:00:00.461) 0:00:01.095 **********
2026-04-05 03:35:00.785163 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:35:00.785177 | orchestrator |
2026-04-05 03:35:00.785191 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-05 03:35:00.785204 | orchestrator | Sunday 05 April 2026 03:34:58 +0000 (0:00:00.572) 0:00:01.667 **********
2026-04-05 03:35:00.785273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 03:35:00.785320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 03:35:00.785355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 03:35:00.785371 | orchestrator |
2026-04-05 03:35:00.785385 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-05 03:35:00.785398 | orchestrator | Sunday 05 April 2026 03:35:00 +0000 (0:00:01.297) 0:00:02.965 **********
2026-04-05 03:35:00.785414 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:00.785429 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:00.785443 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:00.785456 | orchestrator |
2026-04-05 03:35:00.785470 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 03:35:00.785484 | orchestrator | Sunday 05 April 2026 03:35:00 +0000 (0:00:00.548) 0:00:03.514 **********
2026-04-05 03:35:00.785508 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 03:35:07.418438 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 03:35:07.418545 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 03:35:07.418561 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 03:35:07.418573 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 03:35:07.418584 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 03:35:07.418594 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-05 03:35:07.418605 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 03:35:07.418641 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 03:35:07.418653 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 03:35:07.418664 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 03:35:07.418674 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 03:35:07.418685 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 03:35:07.418744 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 03:35:07.418757 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-05 03:35:07.418768 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 03:35:07.418779 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 03:35:07.418789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 03:35:07.418800 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 03:35:07.418811 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 03:35:07.418822 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 03:35:07.418832 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 03:35:07.418843 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-05 03:35:07.418854 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 03:35:07.418866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-05 03:35:07.418879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-05 03:35:07.418890 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-05 03:35:07.418901 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-05 03:35:07.418927 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-05 03:35:07.418938 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-05 03:35:07.418949 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-05 03:35:07.418959 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-05 03:35:07.418973 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-05 03:35:07.418987 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-05 03:35:07.419000 | orchestrator |
2026-04-05 03:35:07.419014 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.419028 | orchestrator | Sunday 05 April 2026 03:35:01 +0000 (0:00:00.828) 0:00:04.342 **********
2026-04-05 03:35:07.419041 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.419064 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.419077 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.419090 | orchestrator |
2026-04-05 03:35:07.419103 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.419116 | orchestrator | Sunday 05 April 2026 03:35:01 +0000 (0:00:00.361) 0:00:04.703 **********
2026-04-05 03:35:07.419129 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419144 | orchestrator |
2026-04-05 03:35:07.419175 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.419188 | orchestrator | Sunday 05 April 2026 03:35:02 +0000 (0:00:00.370) 0:00:05.074 **********
2026-04-05 03:35:07.419201 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419214 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.419227 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.419240 | orchestrator |
2026-04-05 03:35:07.419253 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.419266 | orchestrator | Sunday 05 April 2026 03:35:02 +0000 (0:00:00.337) 0:00:05.412 **********
2026-04-05 03:35:07.419280 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.419294 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.419306 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.419319 | orchestrator |
2026-04-05 03:35:07.419332 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.419346 | orchestrator | Sunday 05 April 2026 03:35:02 +0000 (0:00:00.372) 0:00:05.784 **********
2026-04-05 03:35:07.419359 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419372 | orchestrator |
2026-04-05 03:35:07.419385 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.419398 | orchestrator | Sunday 05 April 2026 03:35:03 +0000 (0:00:00.133) 0:00:05.918 **********
2026-04-05 03:35:07.419412 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419424 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.419435 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.419446 | orchestrator |
2026-04-05 03:35:07.419457 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.419468 | orchestrator | Sunday 05 April 2026 03:35:03 +0000 (0:00:00.322) 0:00:06.240 **********
2026-04-05 03:35:07.419479 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.419490 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.419500 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.419511 | orchestrator |
2026-04-05 03:35:07.419522 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.419533 | orchestrator | Sunday 05 April 2026 03:35:04 +0000 (0:00:00.599) 0:00:06.840 **********
2026-04-05 03:35:07.419544 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419554 | orchestrator |
2026-04-05 03:35:07.419565 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.419576 | orchestrator | Sunday 05 April 2026 03:35:04 +0000 (0:00:00.146) 0:00:06.987 **********
2026-04-05 03:35:07.419587 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419598 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.419609 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.419620 | orchestrator |
2026-04-05 03:35:07.419631 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.419641 | orchestrator | Sunday 05 April 2026 03:35:04 +0000 (0:00:00.357) 0:00:07.344 **********
2026-04-05 03:35:07.419652 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.419663 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.419674 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.419685 | orchestrator |
2026-04-05 03:35:07.419718 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.419730 | orchestrator | Sunday 05 April 2026 03:35:04 +0000 (0:00:00.356) 0:00:07.701 **********
2026-04-05 03:35:07.419741 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419752 | orchestrator |
2026-04-05 03:35:07.419770 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.419781 | orchestrator | Sunday 05 April 2026 03:35:05 +0000 (0:00:00.161) 0:00:07.862 **********
2026-04-05 03:35:07.419792 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419803 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.419814 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.419825 | orchestrator |
2026-04-05 03:35:07.419836 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.419847 | orchestrator | Sunday 05 April 2026 03:35:05 +0000 (0:00:00.549) 0:00:08.412 **********
2026-04-05 03:35:07.419858 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.419869 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.419879 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.419895 | orchestrator |
2026-04-05 03:35:07.419906 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.419917 | orchestrator | Sunday 05 April 2026 03:35:05 +0000 (0:00:00.359) 0:00:08.772 **********
2026-04-05 03:35:07.419928 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419939 | orchestrator |
2026-04-05 03:35:07.419950 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.419961 | orchestrator | Sunday 05 April 2026 03:35:06 +0000 (0:00:00.133) 0:00:08.905 **********
2026-04-05 03:35:07.419971 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.419982 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.419993 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.420004 | orchestrator |
2026-04-05 03:35:07.420015 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.420026 | orchestrator | Sunday 05 April 2026 03:35:06 +0000 (0:00:00.310) 0:00:09.216 **********
2026-04-05 03:35:07.420037 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:07.420048 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:07.420059 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:07.420069 | orchestrator |
2026-04-05 03:35:07.420080 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:07.420091 | orchestrator | Sunday 05 April 2026 03:35:06 +0000 (0:00:00.341) 0:00:09.557 **********
2026-04-05 03:35:07.420102 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.420113 | orchestrator |
2026-04-05 03:35:07.420123 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:07.420134 | orchestrator | Sunday 05 April 2026 03:35:07 +0000 (0:00:00.361) 0:00:09.919 **********
2026-04-05 03:35:07.420145 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:07.420156 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:07.420167 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:07.420178 | orchestrator |
2026-04-05 03:35:07.420189 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:07.420207 | orchestrator | Sunday 05 April 2026 03:35:07 +0000 (0:00:00.327) 0:00:10.246 **********
2026-04-05 03:35:22.430386 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:22.430512 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:22.430535 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:22.430583 | orchestrator |
2026-04-05 03:35:22.430600 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:22.430615 | orchestrator | Sunday 05 April 2026 03:35:07 +0000 (0:00:00.337) 0:00:10.584 **********
2026-04-05 03:35:22.430630 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.430644 | orchestrator |
2026-04-05 03:35:22.430658 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:22.430672 | orchestrator | Sunday 05 April 2026 03:35:07 +0000 (0:00:00.142) 0:00:10.726 **********
2026-04-05 03:35:22.430786 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.430804 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:22.430816 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:22.430825 | orchestrator |
2026-04-05 03:35:22.430833 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:22.430866 | orchestrator | Sunday 05 April 2026 03:35:08 +0000 (0:00:00.368) 0:00:11.094 **********
2026-04-05 03:35:22.430874 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:22.430882 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:22.430890 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:22.430898 | orchestrator |
2026-04-05 03:35:22.430906 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:22.430914 | orchestrator | Sunday 05 April 2026 03:35:08 +0000 (0:00:00.602) 0:00:11.697 **********
2026-04-05 03:35:22.430922 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.430929 | orchestrator |
2026-04-05 03:35:22.430939 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:22.430948 | orchestrator | Sunday 05 April 2026 03:35:09 +0000 (0:00:00.192) 0:00:11.890 **********
2026-04-05 03:35:22.430957 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.430966 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:22.430975 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:22.430984 | orchestrator |
2026-04-05 03:35:22.430993 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:22.431002 | orchestrator | Sunday 05 April 2026 03:35:09 +0000 (0:00:00.335) 0:00:12.225 **********
2026-04-05 03:35:22.431011 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:22.431021 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:22.431031 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:22.431040 | orchestrator |
2026-04-05 03:35:22.431050 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:22.431059 | orchestrator | Sunday 05 April 2026 03:35:09 +0000 (0:00:00.389) 0:00:12.615 **********
2026-04-05 03:35:22.431068 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.431077 | orchestrator |
2026-04-05 03:35:22.431087 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:22.431095 | orchestrator | Sunday 05 April 2026 03:35:09 +0000 (0:00:00.154) 0:00:12.769 **********
2026-04-05 03:35:22.431103 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.431111 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:22.431118 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:22.431126 | orchestrator |
2026-04-05 03:35:22.431134 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 03:35:22.431142 | orchestrator | Sunday 05 April 2026 03:35:10 +0000 (0:00:00.588) 0:00:13.358 **********
2026-04-05 03:35:22.431149 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:35:22.431157 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:35:22.431165 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:35:22.431173 | orchestrator |
2026-04-05 03:35:22.431180 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 03:35:22.431188 | orchestrator | Sunday 05 April 2026 03:35:10 +0000 (0:00:00.388) 0:00:13.747 **********
2026-04-05 03:35:22.431196 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.431204 | orchestrator |
2026-04-05 03:35:22.431212 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 03:35:22.431219 | orchestrator | Sunday 05 April 2026 03:35:11 +0000 (0:00:00.159) 0:00:13.906 **********
2026-04-05 03:35:22.431227 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:35:22.431247 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:35:22.431255 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:35:22.431263 | orchestrator |
2026-04-05 03:35:22.431271 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-05 03:35:22.431279 | orchestrator | Sunday 05 April 2026 03:35:11 +0000 (0:00:00.315) 0:00:14.221 **********
2026-04-05 03:35:22.431287 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:35:22.431295 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:35:22.431308 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:35:22.431321 | orchestrator |
2026-04-05 03:35:22.431333 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-05 03:35:22.431355 | orchestrator | Sunday 05 April 2026 03:35:13 +0000 (0:00:01.826) 0:00:16.048 **********
2026-04-05 03:35:22.431368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 03:35:22.431382 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 03:35:22.431394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 03:35:22.431407 | orchestrator |
2026-04-05 03:35:22.431420 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-05 03:35:22.431434 | orchestrator | Sunday 05 April 2026 03:35:15 +0000 (0:00:01.997) 0:00:18.045 **********
2026-04-05 03:35:22.431447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 03:35:22.431462 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 03:35:22.431476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05 03:35:22.431489 | orchestrator |
2026-04-05 03:35:22.431501 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-05 03:35:22.431527 | orchestrator | Sunday 05 April 2026 03:35:17 +0000 (0:00:01.910) 0:00:19.956 **********
2026-04-05 03:35:22.431535 | orchestrator |
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 03:35:22.431543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 03:35:22.431551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 03:35:22.431559 | orchestrator | 2026-04-05 03:35:22.431567 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-05 03:35:22.431575 | orchestrator | Sunday 05 April 2026 03:35:18 +0000 (0:00:01.622) 0:00:21.578 ********** 2026-04-05 03:35:22.431582 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:35:22.431590 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:35:22.431598 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:35:22.431606 | orchestrator | 2026-04-05 03:35:22.431614 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-05 03:35:22.431621 | orchestrator | Sunday 05 April 2026 03:35:19 +0000 (0:00:00.570) 0:00:22.148 ********** 2026-04-05 03:35:22.431629 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:35:22.431637 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:35:22.431644 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:35:22.431652 | orchestrator | 2026-04-05 03:35:22.431660 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 03:35:22.431668 | orchestrator | Sunday 05 April 2026 03:35:19 +0000 (0:00:00.365) 0:00:22.513 ********** 2026-04-05 03:35:22.431676 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:35:22.431707 | orchestrator | 2026-04-05 03:35:22.431719 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-05 
03:35:22.431727 | orchestrator | Sunday 05 April 2026 03:35:20 +0000 (0:00:00.675) 0:00:23.189 ********** 2026-04-05 03:35:22.431747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:35:22.431776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:35:23.093761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:35:23.093861 | orchestrator | 2026-04-05 03:35:23.093872 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-05 03:35:23.093881 | orchestrator | Sunday 05 April 2026 03:35:22 +0000 (0:00:02.065) 0:00:25.254 ********** 2026-04-05 03:35:23.093905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:23.093920 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:35:23.093934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:23.093942 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:35:23.093956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:25.802977 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:35:25.803071 | orchestrator | 2026-04-05 03:35:25.803083 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-05 03:35:25.803094 | orchestrator | Sunday 05 April 2026 03:35:23 +0000 (0:00:00.667) 0:00:25.922 ********** 2026-04-05 03:35:25.803118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:25.803126 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:35:25.803145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:25.803165 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:35:25.803170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 03:35:25.803175 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:35:25.803179 | orchestrator | 2026-04-05 03:35:25.803184 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-05 03:35:25.803207 | orchestrator | Sunday 05 April 2026 03:35:23 +0000 (0:00:00.876) 0:00:26.798 ********** 2026-04-05 03:35:25.803220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:36:16.684193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:36:16.684333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 03:36:16.684347 | 
orchestrator | 2026-04-05 03:36:16.684355 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 03:36:16.684364 | orchestrator | Sunday 05 April 2026 03:35:25 +0000 (0:00:01.822) 0:00:28.620 ********** 2026-04-05 03:36:16.684371 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:36:16.684379 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:36:16.684386 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:36:16.684393 | orchestrator | 2026-04-05 03:36:16.684399 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 03:36:16.684406 | orchestrator | Sunday 05 April 2026 03:35:26 +0000 (0:00:00.330) 0:00:28.950 ********** 2026-04-05 03:36:16.684414 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:36:16.684421 | orchestrator | 2026-04-05 03:36:16.684427 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-05 03:36:16.684434 | orchestrator | Sunday 05 April 2026 03:35:26 +0000 (0:00:00.707) 0:00:29.657 ********** 2026-04-05 03:36:16.684441 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:36:16.684447 | orchestrator | 2026-04-05 03:36:16.684454 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-05 03:36:16.684461 | orchestrator | Sunday 05 April 2026 03:35:29 +0000 (0:00:02.283) 0:00:31.941 ********** 2026-04-05 03:36:16.684467 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:36:16.684474 | orchestrator | 2026-04-05 03:36:16.684480 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-05 03:36:16.684487 | orchestrator | Sunday 05 April 2026 03:35:31 +0000 (0:00:02.776) 0:00:34.718 ********** 2026-04-05 03:36:16.684494 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:36:16.684500 
| orchestrator | 2026-04-05 03:36:16.684512 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-05 03:36:16.684519 | orchestrator | Sunday 05 April 2026 03:35:49 +0000 (0:00:17.355) 0:00:52.073 ********** 2026-04-05 03:36:16.684526 | orchestrator | 2026-04-05 03:36:16.684532 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-05 03:36:16.684539 | orchestrator | Sunday 05 April 2026 03:35:49 +0000 (0:00:00.067) 0:00:52.141 ********** 2026-04-05 03:36:16.684545 | orchestrator | 2026-04-05 03:36:16.684552 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-05 03:36:16.684558 | orchestrator | Sunday 05 April 2026 03:35:49 +0000 (0:00:00.076) 0:00:52.217 ********** 2026-04-05 03:36:16.684565 | orchestrator | 2026-04-05 03:36:16.684571 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-05 03:36:16.684578 | orchestrator | Sunday 05 April 2026 03:35:49 +0000 (0:00:00.078) 0:00:52.295 ********** 2026-04-05 03:36:16.684584 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:36:16.684591 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:36:16.684598 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:36:16.684604 | orchestrator | 2026-04-05 03:36:16.684611 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:36:16.684619 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 03:36:16.684627 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-05 03:36:16.684633 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-05 03:36:16.684685 | orchestrator | 2026-04-05 03:36:16.684692 | orchestrator | 2026-04-05 
03:36:16.684699 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:36:16.684705 | orchestrator | Sunday 05 April 2026 03:36:16 +0000 (0:00:27.190) 0:01:19.486 ********** 2026-04-05 03:36:16.684712 | orchestrator | =============================================================================== 2026-04-05 03:36:16.684719 | orchestrator | horizon : Restart horizon container ------------------------------------ 27.19s 2026-04-05 03:36:16.684727 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.36s 2026-04-05 03:36:16.684735 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.78s 2026-04-05 03:36:16.684743 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.28s 2026-04-05 03:36:16.684750 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.07s 2026-04-05 03:36:16.684762 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.00s 2026-04-05 03:36:16.684770 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.91s 2026-04-05 03:36:16.684778 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s 2026-04-05 03:36:16.684785 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.82s 2026-04-05 03:36:16.684793 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.62s 2026-04-05 03:36:16.684801 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.30s 2026-04-05 03:36:16.684809 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-04-05 03:36:16.684817 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-04-05 03:36:16.684830 | 
orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2026-04-05 03:36:17.119747 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2026-04-05 03:36:17.119865 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-04-05 03:36:17.119882 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2026-04-05 03:36:17.119920 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2026-04-05 03:36:17.119931 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2026-04-05 03:36:17.119943 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-04-05 03:36:19.658271 | orchestrator | 2026-04-05 03:36:19 | INFO  | Task 920a6635-b9d6-4750-8650-7d3a8ed202d4 (skyline) was prepared for execution. 2026-04-05 03:36:19.658362 | orchestrator | 2026-04-05 03:36:19 | INFO  | It takes a moment until task 920a6635-b9d6-4750-8650-7d3a8ed202d4 (skyline) has been started and output is visible here. 
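The service definitions iterated in the tasks above are plain Python dicts, and their `healthcheck` entries use bare-string seconds. As a minimal sketch (not OSISM or kolla-ansible code; the helper name is hypothetical), this is how such an entry maps onto docker-style `--health-*` options:

```python
# Sketch only: reconstruct docker-run style health options from a
# kolla-style healthcheck dict as logged above. Field values are taken
# verbatim from the log; the function itself is a hypothetical helper.

def docker_healthcheck_args(hc: dict) -> dict:
    """Map a kolla healthcheck dict (seconds as bare strings) to
    docker-run style --health-* option values."""
    return {
        "health-cmd": hc["test"][1],             # shell command after 'CMD-SHELL'
        "health-interval": f'{hc["interval"]}s',
        "health-retries": int(hc["retries"]),
        "health-start-period": f'{hc["start_period"]}s',
        "health-timeout": f'{hc["timeout"]}s',
    }

# Example dict copied from the skyline-apiserver entry in this log.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"],
    "timeout": "30",
}
print(docker_healthcheck_args(hc)["health-interval"])  # 30s
```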
2026-04-05 03:36:52.041964 | orchestrator | 2026-04-05 03:36:52.042175 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:36:52.042199 | orchestrator | 2026-04-05 03:36:52.042211 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:36:52.042223 | orchestrator | Sunday 05 April 2026 03:36:24 +0000 (0:00:00.279) 0:00:00.279 ********** 2026-04-05 03:36:52.042234 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:36:52.042247 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:36:52.042258 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:36:52.042268 | orchestrator | 2026-04-05 03:36:52.042279 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:36:52.042291 | orchestrator | Sunday 05 April 2026 03:36:24 +0000 (0:00:00.324) 0:00:00.603 ********** 2026-04-05 03:36:52.042301 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-05 03:36:52.042313 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-05 03:36:52.042323 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-05 03:36:52.042334 | orchestrator | 2026-04-05 03:36:52.042345 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-05 03:36:52.042356 | orchestrator | 2026-04-05 03:36:52.042367 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-05 03:36:52.042378 | orchestrator | Sunday 05 April 2026 03:36:25 +0000 (0:00:00.482) 0:00:01.086 ********** 2026-04-05 03:36:52.042389 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:36:52.042400 | orchestrator | 2026-04-05 03:36:52.042411 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-04-05 03:36:52.042422 | orchestrator | Sunday 05 April 2026 03:36:25 +0000 (0:00:00.587) 0:00:01.674 ********** 2026-04-05 03:36:52.042433 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-04-05 03:36:52.042444 | orchestrator | 2026-04-05 03:36:52.042454 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-04-05 03:36:52.042465 | orchestrator | Sunday 05 April 2026 03:36:29 +0000 (0:00:03.509) 0:00:05.183 ********** 2026-04-05 03:36:52.042475 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-04-05 03:36:52.042487 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-04-05 03:36:52.042500 | orchestrator | 2026-04-05 03:36:52.042514 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-04-05 03:36:52.042527 | orchestrator | Sunday 05 April 2026 03:36:35 +0000 (0:00:06.710) 0:00:11.894 ********** 2026-04-05 03:36:52.042541 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:36:52.042554 | orchestrator | 2026-04-05 03:36:52.042567 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-04-05 03:36:52.042580 | orchestrator | Sunday 05 April 2026 03:36:39 +0000 (0:00:03.376) 0:00:15.270 ********** 2026-04-05 03:36:52.042593 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:36:52.042677 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-04-05 03:36:52.042695 | orchestrator | 2026-04-05 03:36:52.042708 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-04-05 03:36:52.042753 | orchestrator | Sunday 05 April 2026 03:36:43 +0000 (0:00:04.203) 0:00:19.474 ********** 2026-04-05 03:36:52.042767 | orchestrator | ok: [testbed-node-0] => (item=admin) 
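The `service-ks-register` tasks above register one internal and one public Keystone endpoint per service, built from the internal and external FQDNs plus the service port. A minimal sketch (hypothetical helper, not the role's actual template logic) of how the two skyline URLs in the log are derived:

```python
# Sketch only: derive the internal/public endpoint URLs that
# service-ks-register logged for skyline. FQDNs and port come from the
# log; the function is illustrative, not kolla-ansible code.

def endpoint_urls(internal_fqdn: str, external_fqdn: str, port: int) -> dict:
    """Return the internal and public endpoint URLs for a service."""
    return {
        "internal": f"https://{internal_fqdn}:{port}",
        "public": f"https://{external_fqdn}:{port}",
    }

urls = endpoint_urls("api-int.testbed.osism.xyz", "api.testbed.osism.xyz", 9998)
print(urls["internal"])  # https://api-int.testbed.osism.xyz:9998
```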
2026-04-05 03:36:52.042780 | orchestrator | 2026-04-05 03:36:52.042793 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-05 03:36:52.042806 | orchestrator | Sunday 05 April 2026 03:36:46 +0000 (0:00:03.252) 0:00:22.727 ********** 2026-04-05 03:36:52.042819 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-05 03:36:52.042830 | orchestrator | 2026-04-05 03:36:52.042855 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-05 03:36:52.042866 | orchestrator | Sunday 05 April 2026 03:36:50 +0000 (0:00:03.942) 0:00:26.669 ********** 2026-04-05 03:36:52.042881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:52.042918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:52.042931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:52.042944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:52.042971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:52.042993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.097970 | orchestrator | 2026-04-05 03:36:56.098093 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-05 03:36:56.098103 | orchestrator | Sunday 05 April 2026 03:36:52 +0000 (0:00:01.314) 0:00:27.983 ********** 2026-04-05 03:36:56.098109 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:36:56.098113 | orchestrator | 2026-04-05 03:36:56.098117 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-05 03:36:56.098122 | orchestrator | Sunday 05 April 2026 03:36:52 +0000 (0:00:00.810) 0:00:28.794 ********** 2026-04-05 03:36:56.098127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:36:56.098219 | orchestrator | 2026-04-05 03:36:56.098223 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-05 03:36:56.098227 | orchestrator | Sunday 05 April 2026 03:36:55 +0000 (0:00:02.540) 0:00:31.335 ********** 2026-04-05 03:36:56.098234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:56.098238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:36:56.098242 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:36:56.098250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.498898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499025 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:36:57.499057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499079 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:36:57.499090 | orchestrator | 2026-04-05 03:36:57.499101 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-05 03:36:57.499113 | orchestrator | Sunday 05 April 2026 03:36:56 +0000 (0:00:00.716) 0:00:32.051 ********** 2026-04-05 03:36:57.499129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499234 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:36:57.499251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499272 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:36:57.499282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-05 03:36:57.499306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-05 03:37:06.321048 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:37:06.321184 | orchestrator | 2026-04-05 03:37:06.321206 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-05 03:37:06.321222 | orchestrator | Sunday 05 April 2026 03:36:57 +0000 (0:00:01.393) 0:00:33.445 ********** 2026-04-05 03:37:06.321257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321390 | orchestrator | 2026-04-05 03:37:06.321397 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-05 03:37:06.321405 | orchestrator | Sunday 05 April 2026 03:37:00 +0000 (0:00:02.518) 0:00:35.964 ********** 2026-04-05 03:37:06.321412 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 03:37:06.321420 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 03:37:06.321427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 03:37:06.321434 | orchestrator | 2026-04-05 03:37:06.321442 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-05 03:37:06.321449 | orchestrator | Sunday 05 April 2026 03:37:01 +0000 (0:00:01.652) 0:00:37.616 ********** 2026-04-05 03:37:06.321456 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 03:37:06.321464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 03:37:06.321477 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 03:37:06.321484 | orchestrator | 2026-04-05 03:37:06.321491 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-05 03:37:06.321498 | orchestrator | Sunday 05 April 2026 03:37:03 +0000 (0:00:02.256) 0:00:39.872 ********** 2026-04-05 03:37:06.321506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:06.321521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488561 | orchestrator | 2026-04-05 03:37:08.488582 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-05 03:37:08.488697 | orchestrator | Sunday 05 April 2026 03:37:06 +0000 (0:00:02.399) 0:00:42.271 ********** 2026-04-05 03:37:08.488716 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:37:08.488736 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 03:37:08.488754 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:37:08.488771 | orchestrator | 2026-04-05 03:37:08.488813 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-05 03:37:08.488835 | orchestrator | Sunday 05 April 2026 03:37:06 +0000 (0:00:00.357) 0:00:42.629 ********** 2026-04-05 03:37:08.488867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:08.488969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:39.590922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-05 03:37:39.591074 | orchestrator | 2026-04-05 03:37:39.591102 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-04-05 03:37:39.591124 | orchestrator | Sunday 05 April 2026 03:37:08 +0000 (0:00:01.805) 0:00:44.435 ********** 2026-04-05 03:37:39.591142 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:37:39.591161 | orchestrator | 2026-04-05 03:37:39.591180 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-04-05 03:37:39.591199 | orchestrator | Sunday 05 April 2026 03:37:10 +0000 (0:00:02.284) 0:00:46.719 ********** 2026-04-05 03:37:39.591217 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:37:39.591235 | orchestrator | 2026-04-05 03:37:39.591246 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-04-05 03:37:39.591257 | orchestrator | Sunday 05 April 2026 03:37:13 +0000 (0:00:02.470) 0:00:49.190 ********** 2026-04-05 03:37:39.591268 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:37:39.591279 | orchestrator | 2026-04-05 03:37:39.591290 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-05 03:37:39.591301 | orchestrator | Sunday 05 April 2026 03:37:21 +0000 (0:00:08.431) 0:00:57.621 ********** 2026-04-05 03:37:39.591313 | orchestrator | 2026-04-05 03:37:39.591323 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-05 03:37:39.591334 | orchestrator | Sunday 05 April 2026 03:37:21 +0000 (0:00:00.070) 0:00:57.691 ********** 2026-04-05 03:37:39.591345 | orchestrator | 2026-04-05 03:37:39.591356 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-04-05 03:37:39.591367 | orchestrator | Sunday 05 April 2026 03:37:21 +0000 (0:00:00.070) 0:00:57.762 ********** 2026-04-05 03:37:39.591377 | orchestrator | 2026-04-05 03:37:39.591389 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-05 03:37:39.591399 | orchestrator | Sunday 05 April 2026 03:37:21 +0000 (0:00:00.071) 0:00:57.834 ********** 2026-04-05 03:37:39.591410 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:37:39.591421 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:37:39.591435 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:37:39.591448 | orchestrator | 2026-04-05 03:37:39.591460 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-04-05 03:37:39.591473 | orchestrator | Sunday 05 April 2026 03:37:29 +0000 (0:00:08.034) 0:01:05.868 ********** 2026-04-05 03:37:39.591486 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:37:39.591499 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:37:39.591512 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:37:39.591524 | orchestrator | 2026-04-05 03:37:39.591537 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:37:39.591551 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 03:37:39.591565 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 03:37:39.591609 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 03:37:39.591622 | orchestrator | 2026-04-05 03:37:39.591635 | orchestrator | 2026-04-05 03:37:39.591648 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:37:39.591661 | orchestrator | Sunday 05 April 
2026 03:37:39 +0000 (0:00:09.311) 0:01:15.180 ********** 2026-04-05 03:37:39.591674 | orchestrator | =============================================================================== 2026-04-05 03:37:39.591698 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.31s 2026-04-05 03:37:39.591711 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.43s 2026-04-05 03:37:39.591723 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 8.03s 2026-04-05 03:37:39.591736 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.71s 2026-04-05 03:37:39.591764 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.20s 2026-04-05 03:37:39.591777 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.94s 2026-04-05 03:37:39.591789 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.51s 2026-04-05 03:37:39.591802 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.38s 2026-04-05 03:37:39.591834 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.25s 2026-04-05 03:37:39.591848 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.54s 2026-04-05 03:37:39.591868 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.52s 2026-04-05 03:37:39.591886 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.47s 2026-04-05 03:37:39.591904 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.40s 2026-04-05 03:37:39.591923 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.28s 2026-04-05 03:37:39.591942 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.26s 2026-04-05 03:37:39.591959 | orchestrator | skyline : Check skyline container --------------------------------------- 1.81s 2026-04-05 03:37:39.591976 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.65s 2026-04-05 03:37:39.591992 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.39s 2026-04-05 03:37:39.592011 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.31s 2026-04-05 03:37:39.592029 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.81s 2026-04-05 03:37:42.114202 | orchestrator | 2026-04-05 03:37:42 | INFO  | Task 4b76087f-68fc-4b51-856e-3784d1129ae0 (glance) was prepared for execution. 2026-04-05 03:37:42.114307 | orchestrator | 2026-04-05 03:37:42 | INFO  | It takes a moment until task 4b76087f-68fc-4b51-856e-3784d1129ae0 (glance) has been started and output is visible here. 
2026-04-05 03:38:17.376372 | orchestrator | 2026-04-05 03:38:17.376491 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:38:17.376510 | orchestrator | 2026-04-05 03:38:17.376530 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:38:17.376614 | orchestrator | Sunday 05 April 2026 03:37:46 +0000 (0:00:00.336) 0:00:00.336 ********** 2026-04-05 03:38:17.376633 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:38:17.376652 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:38:17.376669 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:38:17.376687 | orchestrator | 2026-04-05 03:38:17.376705 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:38:17.376721 | orchestrator | Sunday 05 April 2026 03:37:46 +0000 (0:00:00.307) 0:00:00.643 ********** 2026-04-05 03:38:17.376738 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-05 03:38:17.376756 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-05 03:38:17.376774 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-05 03:38:17.376791 | orchestrator | 2026-04-05 03:38:17.376809 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-05 03:38:17.376828 | orchestrator | 2026-04-05 03:38:17.376846 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 03:38:17.376865 | orchestrator | Sunday 05 April 2026 03:37:47 +0000 (0:00:00.480) 0:00:01.124 ********** 2026-04-05 03:38:17.376917 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:38:17.376939 | orchestrator | 2026-04-05 03:38:17.376956 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-05 
03:38:17.376971 | orchestrator | Sunday 05 April 2026 03:37:48 +0000 (0:00:00.576) 0:00:01.701 ********** 2026-04-05 03:38:17.376984 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-05 03:38:17.376997 | orchestrator | 2026-04-05 03:38:17.377010 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-05 03:38:17.377022 | orchestrator | Sunday 05 April 2026 03:37:51 +0000 (0:00:03.495) 0:00:05.197 ********** 2026-04-05 03:38:17.377035 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-05 03:38:17.377048 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-05 03:38:17.377060 | orchestrator | 2026-04-05 03:38:17.377073 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-05 03:38:17.377087 | orchestrator | Sunday 05 April 2026 03:37:58 +0000 (0:00:06.476) 0:00:11.674 ********** 2026-04-05 03:38:17.377099 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:38:17.377113 | orchestrator | 2026-04-05 03:38:17.377126 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-05 03:38:17.377138 | orchestrator | Sunday 05 April 2026 03:38:01 +0000 (0:00:03.562) 0:00:15.236 ********** 2026-04-05 03:38:17.377151 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:38:17.377164 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-05 03:38:17.377175 | orchestrator | 2026-04-05 03:38:17.377186 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-05 03:38:17.377197 | orchestrator | Sunday 05 April 2026 03:38:05 +0000 (0:00:04.274) 0:00:19.510 ********** 2026-04-05 03:38:17.377207 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 
03:38:17.377218 | orchestrator | 2026-04-05 03:38:17.377229 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-05 03:38:17.377240 | orchestrator | Sunday 05 April 2026 03:38:09 +0000 (0:00:03.303) 0:00:22.813 ********** 2026-04-05 03:38:17.377267 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-05 03:38:17.377278 | orchestrator | 2026-04-05 03:38:17.377289 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-05 03:38:17.377308 | orchestrator | Sunday 05 April 2026 03:38:13 +0000 (0:00:03.921) 0:00:26.735 ********** 2026-04-05 03:38:17.377368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:17.377411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:17.377442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:17.377462 | orchestrator | 2026-04-05 03:38:17.377483 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-04-05 03:38:17.377502 | orchestrator | Sunday 05 April 2026 03:38:16 +0000 (0:00:03.526) 0:00:30.262 ********** 2026-04-05 03:38:17.377522 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:38:17.377573 | orchestrator | 2026-04-05 03:38:17.377596 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-05 03:38:33.382969 | orchestrator | Sunday 05 April 2026 03:38:17 +0000 (0:00:00.753) 0:00:31.015 ********** 2026-04-05 03:38:33.383074 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:38:33.383090 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:38:33.383101 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:38:33.383111 | orchestrator | 2026-04-05 03:38:33.383122 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-05 03:38:33.383132 | orchestrator | Sunday 05 April 2026 03:38:21 +0000 (0:00:03.723) 0:00:34.739 ********** 2026-04-05 03:38:33.383143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:38:33.383155 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:38:33.383165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:38:33.383174 | orchestrator | 2026-04-05 03:38:33.383184 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-05 03:38:33.383194 | orchestrator | Sunday 05 April 2026 03:38:22 +0000 (0:00:01.579) 0:00:36.318 ********** 2026-04-05 03:38:33.383204 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 
03:38:33.383214 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:38:33.383224 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:38:33.383233 | orchestrator | 2026-04-05 03:38:33.383243 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-05 03:38:33.383253 | orchestrator | Sunday 05 April 2026 03:38:24 +0000 (0:00:01.448) 0:00:37.767 ********** 2026-04-05 03:38:33.383263 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:38:33.383274 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:38:33.383283 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:38:33.383293 | orchestrator | 2026-04-05 03:38:33.383303 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-05 03:38:33.383312 | orchestrator | Sunday 05 April 2026 03:38:24 +0000 (0:00:00.721) 0:00:38.488 ********** 2026-04-05 03:38:33.383322 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:38:33.383332 | orchestrator | 2026-04-05 03:38:33.383343 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-05 03:38:33.383352 | orchestrator | Sunday 05 April 2026 03:38:24 +0000 (0:00:00.150) 0:00:38.639 ********** 2026-04-05 03:38:33.383362 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:38:33.383372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:38:33.383382 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:38:33.383392 | orchestrator | 2026-04-05 03:38:33.383401 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 03:38:33.383411 | orchestrator | Sunday 05 April 2026 03:38:25 +0000 (0:00:00.313) 0:00:38.952 ********** 2026-04-05 03:38:33.383421 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:38:33.383431 | orchestrator | 2026-04-05 03:38:33.383441 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-05 03:38:33.383450 | orchestrator | Sunday 05 April 2026 03:38:26 +0000 (0:00:00.820) 0:00:39.773 ********** 2026-04-05 03:38:33.383481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:33.383571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:33.383604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:38:33.383634 | orchestrator | 2026-04-05 03:38:33.383652 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-05 03:38:33.383667 | orchestrator | Sunday 05 April 2026 03:38:30 +0000 (0:00:03.949) 0:00:43.723 ********** 2026-04-05 03:38:33.383695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:38:37.083230 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:38:37.083366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:38:37.083433 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:38:37.083460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:38:37.083478 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:38:37.083496 | orchestrator | 2026-04-05 03:38:37.083514 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-05 03:38:37.083599 | orchestrator | Sunday 05 April 2026 03:38:33 +0000 (0:00:03.299) 0:00:47.022 ********** 2026-04-05 03:38:37.083643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:38:37.083680 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:38:37.083712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:38:37.083733 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:38:37.083760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 03:39:13.899729 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:39:13.899849 | orchestrator | 2026-04-05 03:39:13.899867 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-05 03:39:13.899881 | orchestrator | Sunday 05 April 2026 03:38:37 +0000 (0:00:03.701) 0:00:50.724 ********** 2026-04-05 03:39:13.899892 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:39:13.899930 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:39:13.899948 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:39:13.899967 | orchestrator | 2026-04-05 03:39:13.899985 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-05 03:39:13.900004 | orchestrator | Sunday 05 April 2026 03:38:40 +0000 (0:00:03.442) 0:00:54.166 ********** 2026-04-05 03:39:13.900046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:39:13.900064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:39:13.900103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:39:13.900127 | orchestrator | 2026-04-05 03:39:13.900138 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-05 03:39:13.900149 | orchestrator | Sunday 05 April 2026 03:38:44 +0000 (0:00:04.210) 0:00:58.377 ********** 2026-04-05 03:39:13.900160 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:39:13.900171 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:39:13.900182 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:39:13.900193 | orchestrator | 2026-04-05 03:39:13.900204 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-05 03:39:13.900215 | orchestrator | Sunday 05 April 2026 03:38:50 +0000 (0:00:05.906) 0:01:04.284 ********** 2026-04-05 03:39:13.900225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:39:13.900236 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:39:13.900247 | 
orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900258 | orchestrator |
2026-04-05 03:39:13.900269 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-05 03:39:13.900280 | orchestrator | Sunday 05 April 2026 03:38:54 +0000 (0:00:03.983) 0:01:08.267 **********
2026-04-05 03:39:13.900291 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:39:13.900302 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900312 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:39:13.900323 | orchestrator |
2026-04-05 03:39:13.900334 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-05 03:39:13.900345 | orchestrator | Sunday 05 April 2026 03:38:58 +0000 (0:00:03.457) 0:01:11.725 **********
2026-04-05 03:39:13.900356 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:39:13.900367 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:39:13.900377 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900395 | orchestrator |
2026-04-05 03:39:13.900413 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-05 03:39:13.900430 | orchestrator | Sunday 05 April 2026 03:39:01 +0000 (0:00:03.583) 0:01:15.308 **********
2026-04-05 03:39:13.900447 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:39:13.900465 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900482 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:39:13.900575 | orchestrator |
2026-04-05 03:39:13.900598 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-05 03:39:13.900615 | orchestrator | Sunday 05 April 2026 03:39:05 +0000 (0:00:03.553) 0:01:18.862 **********
2026-04-05 03:39:13.900633 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:39:13.900651 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900683 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:39:13.900703 | orchestrator |
2026-04-05 03:39:13.900722 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-05 03:39:13.900737 | orchestrator | Sunday 05 April 2026 03:39:05 +0000 (0:00:00.558) 0:01:19.420 **********
2026-04-05 03:39:13.900748 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-05 03:39:13.900760 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:39:13.900771 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-05 03:39:13.900782 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:39:13.900793 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-05 03:39:13.900803 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:39:13.900814 | orchestrator |
2026-04-05 03:39:13.900824 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-05 03:39:13.900835 | orchestrator | Sunday 05 April 2026 03:39:09 +0000 (0:00:03.651) 0:01:23.071 **********
2026-04-05 03:39:13.900846 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:39:13.900856 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:39:13.900867 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:39:13.900878 | orchestrator |
2026-04-05 03:39:13.900889 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-05 03:39:13.900912 | orchestrator | Sunday 05 April 2026 03:39:13 +0000 (0:00:04.462) 0:01:27.534 **********
2026-04-05 03:40:36.344178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:40:36.344353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:40:36.344468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 03:40:36.344488 | orchestrator | 2026-04-05 03:40:36.344505 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 03:40:36.344520 | orchestrator | Sunday 05 April 2026 03:39:17 +0000 (0:00:04.061) 0:01:31.596 ********** 2026-04-05 03:40:36.344534 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:40:36.344549 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:40:36.344562 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:40:36.344574 | orchestrator | 2026-04-05 03:40:36.344589 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-05 03:40:36.344603 | orchestrator | Sunday 05 April 2026 03:39:18 +0000 (0:00:00.548) 0:01:32.144 ********** 2026-04-05 03:40:36.344618 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:40:36.344630 | orchestrator | 2026-04-05 03:40:36.344643 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] **********
2026-04-05 03:40:36.344656 | orchestrator | Sunday 05 April 2026 03:39:20 +0000 (0:00:02.382) 0:01:34.526 **********
2026-04-05 03:40:36.344668 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:40:36.344682 | orchestrator |
2026-04-05 03:40:36.344694 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-05 03:40:36.344707 | orchestrator | Sunday 05 April 2026 03:39:23 +0000 (0:00:02.568) 0:01:37.095 **********
2026-04-05 03:40:36.344720 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:40:36.344743 | orchestrator |
2026-04-05 03:40:36.344755 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-05 03:40:36.344766 | orchestrator | Sunday 05 April 2026 03:39:25 +0000 (0:00:02.479) 0:01:39.575 **********
2026-04-05 03:40:36.344778 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:40:36.344791 | orchestrator |
2026-04-05 03:40:36.344803 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-05 03:40:36.344814 | orchestrator | Sunday 05 April 2026 03:39:55 +0000 (0:00:30.024) 0:02:09.600 **********
2026-04-05 03:40:36.344826 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:40:36.344837 | orchestrator |
2026-04-05 03:40:36.344848 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-05 03:40:36.344861 | orchestrator | Sunday 05 April 2026 03:39:58 +0000 (0:00:02.341) 0:02:11.941 **********
2026-04-05 03:40:36.344873 | orchestrator |
2026-04-05 03:40:36.344884 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-05 03:40:36.344895 | orchestrator | Sunday 05 April 2026 03:39:58 +0000 (0:00:00.114) 0:02:12.055 **********
2026-04-05 03:40:36.344906 | orchestrator |
2026-04-05 03:40:36.344920 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-05 03:40:36.344933 | orchestrator | Sunday 05 April 2026 03:39:58 +0000 (0:00:00.074) 0:02:12.130 **********
2026-04-05 03:40:36.344946 | orchestrator |
2026-04-05 03:40:36.344960 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-05 03:40:36.344974 | orchestrator | Sunday 05 April 2026 03:39:58 +0000 (0:00:00.080) 0:02:12.210 **********
2026-04-05 03:40:36.344986 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:40:36.345000 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:40:36.345013 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:40:36.345027 | orchestrator |
2026-04-05 03:40:36.345041 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:40:36.345055 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-05 03:40:36.345073 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-05 03:40:36.345087 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-05 03:40:36.345102 | orchestrator |
2026-04-05 03:40:36.345117 | orchestrator |
2026-04-05 03:40:36.345131 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:40:36.345145 | orchestrator | Sunday 05 April 2026 03:40:36 +0000 (0:00:37.765) 0:02:49.976 **********
2026-04-05 03:40:36.345159 | orchestrator | ===============================================================================
2026-04-05 03:40:36.345176 | orchestrator | glance : Restart glance-api container ---------------------------------- 37.77s
2026-04-05 03:40:36.345190 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.02s
2026-04-05 03:40:36.345203 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.48s
2026-04-05 03:40:36.345233 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.91s
2026-04-05 03:40:36.731504 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.46s
2026-04-05 03:40:36.731594 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.27s
2026-04-05 03:40:36.731606 | orchestrator | glance : Copying over config.json files for services -------------------- 4.21s
2026-04-05 03:40:36.731614 | orchestrator | glance : Check glance containers ---------------------------------------- 4.06s
2026-04-05 03:40:36.731621 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.98s
2026-04-05 03:40:36.731628 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.95s
2026-04-05 03:40:36.731635 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.92s
2026-04-05 03:40:36.731678 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.72s
2026-04-05 03:40:36.731686 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.70s
2026-04-05 03:40:36.731692 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.65s
2026-04-05 03:40:36.731699 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.58s
2026-04-05 03:40:36.731705 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.56s
2026-04-05 03:40:36.731711 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.55s
2026-04-05 03:40:36.731717 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.53s
2026-04-05 03:40:36.731722 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.50s
2026-04-05 03:40:36.731728 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.46s
2026-04-05 03:40:39.341780 | orchestrator | 2026-04-05 03:40:39 | INFO  | Task 993def58-3648-4701-be4c-c5964ddd6dbb (cinder) was prepared for execution.
2026-04-05 03:40:39.342833 | orchestrator | 2026-04-05 03:40:39 | INFO  | It takes a moment until task 993def58-3648-4701-be4c-c5964ddd6dbb (cinder) has been started and output is visible here.
2026-04-05 03:41:16.862114 | orchestrator |
2026-04-05 03:41:16.862220 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:41:16.862231 | orchestrator |
2026-04-05 03:41:16.862237 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:41:16.862242 | orchestrator | Sunday 05 April 2026 03:40:43 +0000 (0:00:00.266) 0:00:00.266 **********
2026-04-05 03:41:16.862247 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:41:16.862253 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:41:16.862258 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:41:16.862311 | orchestrator |
2026-04-05 03:41:16.862316 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:41:16.862322 | orchestrator | Sunday 05 April 2026 03:40:44 +0000 (0:00:00.350) 0:00:00.616 **********
2026-04-05 03:41:16.862326 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-05 03:41:16.862332 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-05 03:41:16.862337 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-05 03:41:16.862342 | orchestrator |
2026-04-05 03:41:16.862347 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-05 03:41:16.862352 | orchestrator |
2026-04-05 03:41:16.862356 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 03:41:16.862361 | orchestrator | Sunday 05 April 2026 03:40:44 +0000 (0:00:00.562) 0:00:01.074 **********
2026-04-05 03:41:16.862366 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:41:16.862372 | orchestrator |
2026-04-05 03:41:16.862376 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-05 03:41:16.862381 | orchestrator | Sunday 05 April 2026 03:40:45 +0000 (0:00:00.562) 0:00:01.637 **********
2026-04-05 03:41:16.862386 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-05 03:41:16.862426 | orchestrator |
2026-04-05 03:41:16.862432 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-05 03:41:16.862437 | orchestrator | Sunday 05 April 2026 03:40:49 +0000 (0:00:03.855) 0:00:05.493 **********
2026-04-05 03:41:16.862442 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-05 03:41:16.862448 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-05 03:41:16.862452 | orchestrator |
2026-04-05 03:41:16.862457 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-05 03:41:16.862482 | orchestrator | Sunday 05 April 2026 03:40:55 +0000 (0:00:06.616) 0:00:12.110 **********
2026-04-05 03:41:16.862487 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 03:41:16.862492 | orchestrator |
2026-04-05 03:41:16.862496 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-05 03:41:16.862501 | orchestrator | Sunday 05 April 2026 03:40:59 +0000 (0:00:03.453)
0:00:15.563 ********** 2026-04-05 03:41:16.862505 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:41:16.862510 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-05 03:41:16.862515 | orchestrator | 2026-04-05 03:41:16.862519 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-05 03:41:16.862524 | orchestrator | Sunday 05 April 2026 03:41:03 +0000 (0:00:04.399) 0:00:19.963 ********** 2026-04-05 03:41:16.862529 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:41:16.862533 | orchestrator | 2026-04-05 03:41:16.862538 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-05 03:41:16.862542 | orchestrator | Sunday 05 April 2026 03:41:07 +0000 (0:00:03.442) 0:00:23.405 ********** 2026-04-05 03:41:16.862547 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-05 03:41:16.862551 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-05 03:41:16.862556 | orchestrator | 2026-04-05 03:41:16.862560 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-05 03:41:16.862565 | orchestrator | Sunday 05 April 2026 03:41:14 +0000 (0:00:07.787) 0:00:31.192 ********** 2026-04-05 03:41:16.862583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:16.862605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:16.862610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:16.862621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:16.862629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:16.862637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:16.862644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:16.862654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:22.866003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:22.866187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:22.866201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:22.866221 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-05 03:41:22.866230 | orchestrator |
2026-04-05 03:41:22.866239 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 03:41:22.866249 | orchestrator | Sunday 05 April 2026 03:41:16 +0000 (0:00:02.070) 0:00:33.263 **********
2026-04-05 03:41:22.866256 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:41:22.866265 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:41:22.866272 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:41:22.866279 | orchestrator |
2026-04-05 03:41:22.866287 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 03:41:22.866294 | orchestrator | Sunday 05 April 2026 03:41:17 +0000 (0:00:00.528) 0:00:33.792 **********
2026-04-05 03:41:22.866301 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:41:22.866309 | orchestrator |
2026-04-05 03:41:22.866316 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-05 03:41:22.866323 | orchestrator | Sunday 05 April 2026 03:41:18 +0000 (0:00:00.575) 0:00:34.368 **********
2026-04-05 03:41:22.866331 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-05 03:41:22.866338 |
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-05 03:41:22.866346 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-05 03:41:22.866353 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-05 03:41:22.866366 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-05 03:41:22.866373 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-05 03:41:22.866381 | orchestrator | 2026-04-05 03:41:22.866441 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-05 03:41:22.866449 | orchestrator | Sunday 05 April 2026 03:41:19 +0000 (0:00:01.675) 0:00:36.043 ********** 2026-04-05 03:41:22.866472 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:22.866482 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:22.866495 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:22.866503 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:22.866520 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:34.212167 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-05 03:41:34.212287 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 03:41:34.212311 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 03:41:34.212346 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 03:41:34.212363 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 03:41:34.212491 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 
03:41:34.212504 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-05 03:41:34.212513 | orchestrator | 2026-04-05 03:41:34.212522 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-05 03:41:34.212532 | orchestrator | Sunday 05 April 2026 03:41:23 +0000 (0:00:03.506) 0:00:39.549 ********** 2026-04-05 03:41:34.212540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:41:34.212549 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:41:34.212557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-05 03:41:34.212565 | orchestrator | 2026-04-05 03:41:34.212573 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-05 03:41:34.212581 | orchestrator | Sunday 05 April 2026 03:41:24 +0000 (0:00:01.721) 0:00:41.271 ********** 2026-04-05 03:41:34.212590 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-05 03:41:34.212598 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-05 03:41:34.212606 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-05 03:41:34.212614 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:41:34.212622 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:41:34.212636 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-05 03:41:34.212644 | orchestrator | 2026-04-05 03:41:34.212652 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-05 03:41:34.212660 | orchestrator | Sunday 05 April 2026 03:41:27 +0000 (0:00:02.800) 0:00:44.071 ********** 2026-04-05 03:41:34.212669 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 03:41:34.212677 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-05 03:41:34.212691 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-05 03:41:34.212700 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 03:41:34.212709 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-05 03:41:34.212719 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-05 03:41:34.212728 | orchestrator | 2026-04-05 03:41:34.212737 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-05 03:41:34.212746 | orchestrator | Sunday 05 April 2026 03:41:28 +0000 (0:00:01.045) 0:00:45.117 ********** 2026-04-05 03:41:34.212756 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:41:34.212766 | orchestrator | 2026-04-05 03:41:34.212775 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-05 03:41:34.212784 | orchestrator | Sunday 05 April 2026 03:41:28 +0000 (0:00:00.155) 0:00:45.273 ********** 2026-04-05 03:41:34.212793 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:41:34.212802 | orchestrator | 
skipping: [testbed-node-1] 2026-04-05 03:41:34.212812 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:41:34.212820 | orchestrator | 2026-04-05 03:41:34.212829 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 03:41:34.212838 | orchestrator | Sunday 05 April 2026 03:41:29 +0000 (0:00:00.537) 0:00:45.810 ********** 2026-04-05 03:41:34.212848 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:41:34.212858 | orchestrator | 2026-04-05 03:41:34.212867 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-05 03:41:34.212876 | orchestrator | Sunday 05 April 2026 03:41:30 +0000 (0:00:00.603) 0:00:46.414 ********** 2026-04-05 03:41:34.212893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:35.200321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:35.200511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:35.200549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 
03:41:35.200657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:35.200690 | orchestrator | 2026-04-05 03:41:35.200702 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-05 03:41:35.200715 | orchestrator | Sunday 05 April 2026 03:41:34 +0000 (0:00:04.213) 0:00:50.627 ********** 2026-04-05 03:41:35.200735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.311108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311341 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:41:35.311362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.311451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311568 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:41:35.311588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.311608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.311684 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 03:41:35.311705 | orchestrator | 2026-04-05 03:41:35.311728 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-05 03:41:35.311763 | orchestrator | Sunday 05 April 2026 03:41:35 +0000 (0:00:01.002) 0:00:51.630 ********** 2026-04-05 03:41:35.931323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.931523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931583 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:41:35.931596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.931663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931736 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:41:35.931757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:35.931777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:35.931822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:40.770264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:40.770439 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:41:40.770462 | orchestrator | 2026-04-05 03:41:40.770491 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
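Editor's note on the task above: kolla containers bind-mount `/etc/kolla/<service>/` onto `/var/lib/kolla/config_files/` read-only (visible in every volume list in this log), and the copied `config.json` tells the container entrypoint which files to move into place at start. A minimal sketch of building such a file follows; the command string, owner, and permissions are illustrative assumptions, not values taken from this job.

```python
import json

def make_config_json(command, files):
    """Sketch of a kolla-style config.json: a start command plus a list of
    files to copy from the bind-mounted config dir into the container.
    Owner/perm values here are assumptions for illustration."""
    return json.dumps({
        "command": command,
        "config_files": [
            {"source": f"/var/lib/kolla/config_files/{name}",
             "dest": dest,
             "owner": "cinder",
             "perm": "0600"}
            for name, dest in files
        ],
    }, indent=4)

# Hypothetical example for the cinder-scheduler service seen in the log:
print(make_config_json(
    "cinder-scheduler --config-file /etc/cinder/cinder.conf",
    [("cinder.conf", "/etc/cinder/cinder.conf")],
))
```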
2026-04-05 03:41:40.770505 | orchestrator | Sunday 05 April 2026 03:41:36 +0000 (0:00:00.959) 0:00:52.590 ********** 2026-04-05 03:41:40.770519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:40.770533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 
03:41:40.770545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:40.770596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:40.770715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.593969 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594202 | orchestrator | 2026-04-05 03:41:54.594216 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-05 03:41:54.594229 | orchestrator | Sunday 05 April 2026 03:41:40 +0000 (0:00:04.592) 0:00:57.182 ********** 2026-04-05 03:41:54.594243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-05 03:41:54.594264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-05 03:41:54.594281 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-05 03:41:54.594299 | orchestrator | 2026-04-05 03:41:54.594317 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-05 03:41:54.594335 | orchestrator | Sunday 05 April 2026 03:41:42 +0000 (0:00:02.112) 0:00:59.294 ********** 2026-04-05 03:41:54.594355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:54.594437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:54.594486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:41:54.594521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:54.594636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:57.178495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:57.178577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:57.178586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:41:57.178612 | orchestrator | 2026-04-05 03:41:57.178621 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-05 03:41:57.178630 | orchestrator | Sunday 05 April 2026 03:41:54 +0000 (0:00:11.713) 0:01:11.008 ********** 2026-04-05 03:41:57.178637 | orchestrator | changed: [testbed-node-0] 
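Editor's note on the `Generating 'hostnqn' file` task above: the hostnqn identifies each node as an NVMe-over-Fabrics host, conventionally in the UUID form `nqn.2014-08.org.nvmexpress:uuid:<uuid>` defined by the NVMe base specification. The task itself writes the file on each node (typically `/etc/nvme/hostnqn`, though the path is not shown in this log); the sketch below only illustrates the string format and is an assumption, not the role's actual implementation.

```python
import uuid

def make_hostnqn() -> str:
    """Generate an NVMe-oF host NQN in the standard uuid form.
    Using a random UUID here is an illustrative choice; the real task
    may derive it differently."""
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

nqn = make_hostnqn()
print(nqn)
```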
2026-04-05 03:41:57.178645 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:41:57.178652 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:41:57.178659 | orchestrator | 2026-04-05 03:41:57.178666 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-05 03:41:57.178673 | orchestrator | Sunday 05 April 2026 03:41:56 +0000 (0:00:01.575) 0:01:12.584 ********** 2026-04-05 03:41:57.178681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:57.178690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-04-05 03:41:57.178713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:57.178722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:41:57.178734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:41:57.178741 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:41:57.178748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:41:57.178755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:41:57.178770 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:42:00.853472 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:42:00.853558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-05 03:42:00.853586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:42:00.853592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 03:42:00.853598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 03:42:00.853602 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:42:00.853607 | orchestrator | 2026-04-05 
03:42:00.853613 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-05 03:42:00.853618 | orchestrator | Sunday 05 April 2026 03:41:57 +0000 (0:00:01.007) 0:01:13.592 ********** 2026-04-05 03:42:00.853623 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:42:00.853627 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:42:00.853631 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:42:00.853635 | orchestrator | 2026-04-05 03:42:00.853639 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-05 03:42:00.853643 | orchestrator | Sunday 05 April 2026 03:41:57 +0000 (0:00:00.644) 0:01:14.236 ********** 2026-04-05 03:42:00.853670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:42:00.853680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:42:00.853685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-05 03:42:00.853689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:42:00.853694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:42:00.853701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:42:00.853710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:43:38.915641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:43:38.915754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 03:43:38.915771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:43:38.915784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 03:43:38.915813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-05 03:43:38.915864 | orchestrator | 2026-04-05 03:43:38.915889 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 03:43:38.915909 | orchestrator | Sunday 05 April 2026 03:42:00 +0000 (0:00:03.022) 0:01:17.258 ********** 2026-04-05 03:43:38.915927 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:43:38.915947 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:43:38.915964 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:43:38.915982 | orchestrator | 2026-04-05 03:43:38.916001 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-05 03:43:38.916021 | orchestrator | Sunday 05 April 2026 03:42:01 +0000 (0:00:00.344) 0:01:17.603 ********** 2026-04-05 03:43:38.916039 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916057 | orchestrator | 2026-04-05 03:43:38.916103 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-05 03:43:38.916126 | orchestrator | Sunday 05 April 2026 03:42:03 +0000 (0:00:02.221) 0:01:19.825 ********** 2026-04-05 03:43:38.916146 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916165 | orchestrator | 2026-04-05 03:43:38.916184 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-05 03:43:38.916203 | orchestrator | Sunday 05 April 2026 03:42:05 +0000 (0:00:02.281) 0:01:22.107 ********** 2026-04-05 03:43:38.916223 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916241 | orchestrator | 2026-04-05 03:43:38.916260 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 03:43:38.916303 | orchestrator | Sunday 05 April 2026 03:42:26 +0000 (0:00:20.906) 0:01:43.013 ********** 2026-04-05 03:43:38.916326 | orchestrator | 2026-04-05 03:43:38.916345 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-04-05 03:43:38.916365 | orchestrator | Sunday 05 April 2026 03:42:26 +0000 (0:00:00.070) 0:01:43.084 ********** 2026-04-05 03:43:38.916384 | orchestrator | 2026-04-05 03:43:38.916402 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 03:43:38.916419 | orchestrator | Sunday 05 April 2026 03:42:26 +0000 (0:00:00.077) 0:01:43.161 ********** 2026-04-05 03:43:38.916431 | orchestrator | 2026-04-05 03:43:38.916442 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-05 03:43:38.916453 | orchestrator | Sunday 05 April 2026 03:42:26 +0000 (0:00:00.071) 0:01:43.233 ********** 2026-04-05 03:43:38.916464 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916475 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:43:38.916486 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:43:38.916497 | orchestrator | 2026-04-05 03:43:38.916507 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-05 03:43:38.916518 | orchestrator | Sunday 05 April 2026 03:42:54 +0000 (0:00:27.146) 0:02:10.380 ********** 2026-04-05 03:43:38.916529 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916540 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:43:38.916551 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:43:38.916562 | orchestrator | 2026-04-05 03:43:38.916572 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-05 03:43:38.916583 | orchestrator | Sunday 05 April 2026 03:43:04 +0000 (0:00:10.475) 0:02:20.856 ********** 2026-04-05 03:43:38.916594 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916605 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:43:38.916616 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:43:38.916626 | orchestrator | 2026-04-05 
03:43:38.916637 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-05 03:43:38.916648 | orchestrator | Sunday 05 April 2026 03:43:27 +0000 (0:00:22.750) 0:02:43.606 ********** 2026-04-05 03:43:38.916659 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:43:38.916669 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:43:38.916680 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:43:38.916704 | orchestrator | 2026-04-05 03:43:38.916715 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-05 03:43:38.916727 | orchestrator | Sunday 05 April 2026 03:43:38 +0000 (0:00:11.314) 0:02:54.921 ********** 2026-04-05 03:43:38.916738 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:43:38.916749 | orchestrator | 2026-04-05 03:43:38.916760 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:43:38.916772 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 03:43:38.916785 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 03:43:38.916795 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 03:43:38.916806 | orchestrator | 2026-04-05 03:43:38.916817 | orchestrator | 2026-04-05 03:43:38.916828 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:43:38.916839 | orchestrator | Sunday 05 April 2026 03:43:38 +0000 (0:00:00.297) 0:02:55.219 ********** 2026-04-05 03:43:38.916850 | orchestrator | =============================================================================== 2026-04-05 03:43:38.916860 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.15s 2026-04-05 03:43:38.916871 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 22.75s 2026-04-05 03:43:38.916882 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.91s 2026-04-05 03:43:38.916893 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.71s 2026-04-05 03:43:38.916912 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.31s 2026-04-05 03:43:38.916923 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.48s 2026-04-05 03:43:38.916934 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.79s 2026-04-05 03:43:38.916944 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.62s 2026-04-05 03:43:38.916955 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.59s 2026-04-05 03:43:38.916966 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.40s 2026-04-05 03:43:38.916976 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.21s 2026-04-05 03:43:38.916987 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.86s 2026-04-05 03:43:38.916998 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.51s 2026-04-05 03:43:38.917008 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.45s 2026-04-05 03:43:38.917029 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s 2026-04-05 03:43:39.330855 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.02s 2026-04-05 03:43:39.330922 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.80s 2026-04-05 03:43:39.330929 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.28s 2026-04-05 03:43:39.330941 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.22s 2026-04-05 03:43:39.330946 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.11s 2026-04-05 03:43:41.887181 | orchestrator | 2026-04-05 03:43:41 | INFO  | Task eaa49bec-1a19-462c-af9b-2fb3ec12c4fa (barbican) was prepared for execution. 2026-04-05 03:43:41.887336 | orchestrator | 2026-04-05 03:43:41 | INFO  | It takes a moment until task eaa49bec-1a19-462c-af9b-2fb3ec12c4fa (barbican) has been started and output is visible here. 2026-04-05 03:44:27.483406 | orchestrator | 2026-04-05 03:44:27.483513 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:44:27.483554 | orchestrator | 2026-04-05 03:44:27.483567 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:44:27.483578 | orchestrator | Sunday 05 April 2026 03:43:46 +0000 (0:00:00.279) 0:00:00.279 ********** 2026-04-05 03:44:27.483589 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:44:27.483601 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:44:27.483612 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:44:27.483623 | orchestrator | 2026-04-05 03:44:27.483635 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:44:27.483645 | orchestrator | Sunday 05 April 2026 03:43:46 +0000 (0:00:00.379) 0:00:00.658 ********** 2026-04-05 03:44:27.483656 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-05 03:44:27.483668 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-05 03:44:27.483679 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-05 03:44:27.483690 | orchestrator | 2026-04-05 03:44:27.483700 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-05 03:44:27.483711 | orchestrator | 2026-04-05 03:44:27.483722 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 03:44:27.483739 | orchestrator | Sunday 05 April 2026 03:43:47 +0000 (0:00:00.490) 0:00:01.149 ********** 2026-04-05 03:44:27.483758 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:44:27.483777 | orchestrator | 2026-04-05 03:44:27.483792 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-05 03:44:27.483808 | orchestrator | Sunday 05 April 2026 03:43:47 +0000 (0:00:00.587) 0:00:01.736 ********** 2026-04-05 03:44:27.483824 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-05 03:44:27.483842 | orchestrator | 2026-04-05 03:44:27.483860 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-05 03:44:27.483879 | orchestrator | Sunday 05 April 2026 03:43:51 +0000 (0:00:03.541) 0:00:05.278 ********** 2026-04-05 03:44:27.483897 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-05 03:44:27.483917 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-05 03:44:27.483929 | orchestrator | 2026-04-05 03:44:27.483942 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-05 03:44:27.483954 | orchestrator | Sunday 05 April 2026 03:43:57 +0000 (0:00:06.643) 0:00:11.922 ********** 2026-04-05 03:44:27.483967 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:44:27.483980 | orchestrator | 2026-04-05 03:44:27.483992 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-05 
03:44:27.484005 | orchestrator | Sunday 05 April 2026 03:44:01 +0000 (0:00:03.384) 0:00:15.306 ********** 2026-04-05 03:44:27.484017 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:44:27.484030 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-05 03:44:27.484042 | orchestrator | 2026-04-05 03:44:27.484055 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-05 03:44:27.484068 | orchestrator | Sunday 05 April 2026 03:44:05 +0000 (0:00:04.312) 0:00:19.618 ********** 2026-04-05 03:44:27.484081 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:44:27.484094 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-05 03:44:27.484108 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-05 03:44:27.484135 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-05 03:44:27.484148 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-05 03:44:27.484160 | orchestrator | 2026-04-05 03:44:27.484172 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-05 03:44:27.484184 | orchestrator | Sunday 05 April 2026 03:44:21 +0000 (0:00:16.192) 0:00:35.811 ********** 2026-04-05 03:44:27.484218 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-05 03:44:27.484230 | orchestrator | 2026-04-05 03:44:27.484243 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-05 03:44:27.484295 | orchestrator | Sunday 05 April 2026 03:44:25 +0000 (0:00:03.926) 0:00:39.737 ********** 2026-04-05 03:44:27.484315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:27.484349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:27.484362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:27.484375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:27.484394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:27.484414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:27.484435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.594847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.594955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.594973 | orchestrator | 2026-04-05 03:44:33.594988 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-05 03:44:33.595000 | orchestrator | Sunday 05 April 2026 03:44:27 +0000 (0:00:01.658) 0:00:41.396 ********** 2026-04-05 03:44:33.595013 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-05 03:44:33.595024 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-05 03:44:33.595035 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-05 03:44:33.595046 | orchestrator | 2026-04-05 03:44:33.595057 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-05 03:44:33.595068 | orchestrator | Sunday 05 April 2026 03:44:28 +0000 (0:00:01.249) 0:00:42.646 ********** 2026-04-05 03:44:33.595080 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:44:33.595092 | orchestrator | 2026-04-05 03:44:33.595103 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-05 03:44:33.595139 | orchestrator | Sunday 05 April 2026 03:44:29 +0000 (0:00:00.346) 0:00:42.993 ********** 2026-04-05 03:44:33.595151 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 03:44:33.595162 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:44:33.595173 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:44:33.595184 | orchestrator | 2026-04-05 03:44:33.595194 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 03:44:33.595205 | orchestrator | Sunday 05 April 2026 03:44:29 +0000 (0:00:00.354) 0:00:43.347 ********** 2026-04-05 03:44:33.595231 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:44:33.595298 | orchestrator | 2026-04-05 03:44:33.595311 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-05 03:44:33.595322 | orchestrator | Sunday 05 April 2026 03:44:29 +0000 (0:00:00.566) 0:00:43.913 ********** 2026-04-05 03:44:33.595335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:33.595365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:33.595378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:33.595390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.595418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.595430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.595441 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:33.595462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:35.039777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:35.039867 | orchestrator | 2026-04-05 03:44:35.039877 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-05 03:44:35.039886 | orchestrator | Sunday 05 April 2026 03:44:33 +0000 (0:00:03.587) 0:00:47.501 ********** 2026-04-05 03:44:35.039913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:35.039936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.039949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.039960 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:44:35.039971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:35.040001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.040012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.040031 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:44:35.040047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:35.040058 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.040065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:35.040073 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:44:35.040080 | orchestrator | 2026-04-05 03:44:35.040088 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-05 03:44:35.040095 | orchestrator | Sunday 05 April 2026 03:44:34 +0000 (0:00:00.614) 0:00:48.116 ********** 2026-04-05 03:44:35.040113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:38.843611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:38.843712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 
03:44:38.843729 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:44:38.843764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:38.843784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:38.843817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:38.843835 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:44:38.843876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:38.843925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:38.843955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:38.844017 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:44:38.844040 | orchestrator | 2026-04-05 03:44:38.844059 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-05 03:44:38.844080 | orchestrator | Sunday 05 April 2026 03:44:35 +0000 (0:00:00.840) 0:00:48.956 ********** 2026-04-05 03:44:38.844100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:38.844115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:38.844151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:48.721637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:48.721947 | orchestrator | 2026-04-05 03:44:48.721967 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-05 03:44:48.721987 | orchestrator | Sunday 05 April 2026 03:44:38 +0000 (0:00:03.801) 0:00:52.758 ********** 2026-04-05 03:44:48.722006 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:44:48.722108 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:44:48.722129 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:44:48.722149 | orchestrator | 2026-04-05 03:44:48.722227 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-05 03:44:48.722276 | orchestrator | Sunday 05 April 2026 03:44:40 +0000 (0:00:01.589) 0:00:54.348 ********** 2026-04-05 03:44:48.722290 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:44:48.722303 | orchestrator | 2026-04-05 03:44:48.722315 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-05 03:44:48.722328 | orchestrator | Sunday 05 April 2026 03:44:41 +0000 (0:00:00.977) 0:00:55.325 ********** 2026-04-05 03:44:48.722340 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:44:48.722353 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:44:48.722366 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:44:48.722378 | orchestrator | 2026-04-05 03:44:48.722390 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-05 03:44:48.722403 | orchestrator | Sunday 05 April 2026 03:44:41 +0000 (0:00:00.588) 0:00:55.913 ********** 2026-04-05 03:44:48.722449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:48.722466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:48.722494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:48.722515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.615901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.616037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.616063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.616096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.616124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:49.616134 | orchestrator | 2026-04-05 03:44:49.616154 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-05 03:44:49.616165 | orchestrator | Sunday 05 April 2026 03:44:48 +0000 (0:00:06.726) 0:01:02.640 ********** 2026-04-05 03:44:49.616193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:49.616203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:49.616218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:49.616228 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:44:49.616275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:49.616297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:49.616306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:49.616315 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:44:49.616333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-05 03:44:52.143889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:44:52.143980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:44:52.144014 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:44:52.144023 | orchestrator | 2026-04-05 03:44:52.144031 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-05 03:44:52.144040 | orchestrator | Sunday 05 April 2026 03:44:49 +0000 (0:00:00.889) 0:01:03.529 ********** 2026-04-05 03:44:52.144048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:52.144056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:52.144082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-05 03:44:52.144094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:44:52.144146 | orchestrator | 2026-04-05 03:44:52.144153 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 03:44:52.144166 | orchestrator | Sunday 05 April 2026 03:44:52 +0000 (0:00:02.527) 0:01:06.056 ********** 2026-04-05 03:45:34.009729 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:45:34.009848 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
03:45:34.009866 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:45:34.009882 | orchestrator |
2026-04-05 03:45:34.009916 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-05 03:45:34.009959 | orchestrator | Sunday 05 April 2026 03:44:52 +0000 (0:00:00.316) 0:01:06.373 **********
2026-04-05 03:45:34.009975 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.009990 | orchestrator |
2026-04-05 03:45:34.010005 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-05 03:45:34.010089 | orchestrator | Sunday 05 April 2026 03:44:54 +0000 (0:00:02.107) 0:01:08.480 **********
2026-04-05 03:45:34.010107 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.010121 | orchestrator |
2026-04-05 03:45:34.010137 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-05 03:45:34.010151 | orchestrator | Sunday 05 April 2026 03:44:56 +0000 (0:00:02.284) 0:01:10.764 **********
2026-04-05 03:45:34.010165 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.010179 | orchestrator |
2026-04-05 03:45:34.010192 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 03:45:34.010230 | orchestrator | Sunday 05 April 2026 03:45:09 +0000 (0:00:12.999) 0:01:23.764 **********
2026-04-05 03:45:34.010245 | orchestrator |
2026-04-05 03:45:34.010258 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 03:45:34.010272 | orchestrator | Sunday 05 April 2026 03:45:09 +0000 (0:00:00.072) 0:01:23.836 **********
2026-04-05 03:45:34.010286 | orchestrator |
2026-04-05 03:45:34.010301 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-05 03:45:34.010316 | orchestrator | Sunday 05 April 2026 03:45:09 +0000 (0:00:00.071) 0:01:23.907 **********
2026-04-05 03:45:34.010330 | orchestrator |
2026-04-05 03:45:34.010344 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-05 03:45:34.010359 | orchestrator | Sunday 05 April 2026 03:45:10 +0000 (0:00:00.075) 0:01:23.983 **********
2026-04-05 03:45:34.010371 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:45:34.010385 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:45:34.010399 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.010413 | orchestrator |
2026-04-05 03:45:34.010427 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-05 03:45:34.010439 | orchestrator | Sunday 05 April 2026 03:45:17 +0000 (0:00:07.908) 0:01:31.891 **********
2026-04-05 03:45:34.010453 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.010467 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:45:34.010481 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:45:34.010497 | orchestrator |
2026-04-05 03:45:34.010511 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-05 03:45:34.010525 | orchestrator | Sunday 05 April 2026 03:45:23 +0000 (0:00:05.218) 0:01:37.109 **********
2026-04-05 03:45:34.010537 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:45:34.010551 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:45:34.010565 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:45:34.010578 | orchestrator |
2026-04-05 03:45:34.010593 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:45:34.010607 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 03:45:34.010622 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 03:45:34.010635 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 03:45:34.010647 | orchestrator |
2026-04-05 03:45:34.010660 | orchestrator |
2026-04-05 03:45:34.010674 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:45:34.010687 | orchestrator | Sunday 05 April 2026 03:45:33 +0000 (0:00:10.426) 0:01:47.536 **********
2026-04-05 03:45:34.010700 | orchestrator | ===============================================================================
2026-04-05 03:45:34.010713 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.19s
2026-04-05 03:45:34.010743 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.00s
2026-04-05 03:45:34.010758 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.43s
2026-04-05 03:45:34.010770 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.91s
2026-04-05 03:45:34.010783 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.73s
2026-04-05 03:45:34.010796 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.64s
2026-04-05 03:45:34.010809 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.22s
2026-04-05 03:45:34.010822 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.31s
2026-04-05 03:45:34.010836 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.93s
2026-04-05 03:45:34.010849 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.80s
2026-04-05 03:45:34.010862 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.59s
2026-04-05 03:45:34.010876 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.54s
2026-04-05 03:45:34.010889 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.38s
2026-04-05 03:45:34.010903 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.53s
2026-04-05 03:45:34.010917 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s
2026-04-05 03:45:34.010952 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s
2026-04-05 03:45:34.010961 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.66s
2026-04-05 03:45:34.010978 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.59s
2026-04-05 03:45:34.010986 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.25s
2026-04-05 03:45:34.010994 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.98s
2026-04-05 03:45:36.612041 | orchestrator | 2026-04-05 03:45:36 | INFO  | Task 5ea33b3e-ad83-4c3f-9c55-6b8fec73161e (designate) was prepared for execution.
2026-04-05 03:45:36.612123 | orchestrator | 2026-04-05 03:45:36 | INFO  | It takes a moment until task 5ea33b3e-ad83-4c3f-9c55-6b8fec73161e (designate) has been started and output is visible here.
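The PLAY RECAP lines above follow a fixed `host : ok=N changed=N …` layout, so they can be extracted mechanically when post-processing such job consoles. A minimal sketch, assuming nothing beyond the standard library; the `parse_recap` helper is hypothetical and not part of Zuul or Ansible:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line: str):
    """Return per-host counters from a recap line, or None if it does not match."""
    m = RECAP_RE.search(line)
    if not m:
        return None
    # Keep the host as a string, convert every counter to int.
    return {k: (v if k == "host" else int(v)) for k, v in m.groupdict().items()}

line = "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
print(parse_recap(line))
```

A non-zero `failed` or `unreachable` counter is what typically flips a run like this one from success to failure, so this is the field a log-scraping check would assert on.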
2026-04-05 03:46:09.782709 | orchestrator | 2026-04-05 03:46:09.782827 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:46:09.782844 | orchestrator | 2026-04-05 03:46:09.782857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:46:09.782868 | orchestrator | Sunday 05 April 2026 03:45:41 +0000 (0:00:00.278) 0:00:00.279 ********** 2026-04-05 03:46:09.782880 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:46:09.782892 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:46:09.782903 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:46:09.782914 | orchestrator | 2026-04-05 03:46:09.782925 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:46:09.782936 | orchestrator | Sunday 05 April 2026 03:45:41 +0000 (0:00:00.360) 0:00:00.639 ********** 2026-04-05 03:46:09.782948 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-05 03:46:09.782960 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-05 03:46:09.782971 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-05 03:46:09.782982 | orchestrator | 2026-04-05 03:46:09.782994 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-05 03:46:09.783005 | orchestrator | 2026-04-05 03:46:09.783016 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 03:46:09.783027 | orchestrator | Sunday 05 April 2026 03:45:42 +0000 (0:00:00.483) 0:00:01.123 ********** 2026-04-05 03:46:09.783039 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:46:09.783077 | orchestrator | 2026-04-05 03:46:09.783089 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-04-05 03:46:09.783100 | orchestrator | Sunday 05 April 2026 03:45:42 +0000 (0:00:00.634) 0:00:01.757 **********
2026-04-05 03:46:09.783111 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-05 03:46:09.783122 | orchestrator |
2026-04-05 03:46:09.783133 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-05 03:46:09.783144 | orchestrator | Sunday 05 April 2026 03:45:46 +0000 (0:00:03.530) 0:00:05.288 **********
2026-04-05 03:46:09.783155 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-05 03:46:09.783166 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-05 03:46:09.783177 | orchestrator |
2026-04-05 03:46:09.783215 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-05 03:46:09.783227 | orchestrator | Sunday 05 April 2026 03:45:52 +0000 (0:00:06.694) 0:00:11.983 **********
2026-04-05 03:46:09.783238 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 03:46:09.783252 | orchestrator |
2026-04-05 03:46:09.783265 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-05 03:46:09.783279 | orchestrator | Sunday 05 April 2026 03:45:56 +0000 (0:00:03.308) 0:00:15.292 **********
2026-04-05 03:46:09.783292 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 03:46:09.783305 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-05 03:46:09.783318 | orchestrator |
2026-04-05 03:46:09.783331 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-05 03:46:09.783344 | orchestrator | Sunday 05 April 2026 03:46:00 +0000 (0:00:04.302) 0:00:19.594 **********
2026-04-05 03:46:09.783357 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-04-05 03:46:09.783370 | orchestrator | 2026-04-05 03:46:09.783383 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-05 03:46:09.783396 | orchestrator | Sunday 05 April 2026 03:46:03 +0000 (0:00:03.281) 0:00:22.876 ********** 2026-04-05 03:46:09.783409 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-05 03:46:09.783422 | orchestrator | 2026-04-05 03:46:09.783434 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-05 03:46:09.783447 | orchestrator | Sunday 05 April 2026 03:46:07 +0000 (0:00:03.885) 0:00:26.762 ********** 2026-04-05 03:46:09.783478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:09.783516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:09.783541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:09.783556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:09.783571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:09.783585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:09.783605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:09.783626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 
03:46:16.686465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:16.686486 | orchestrator | 2026-04-05 03:46:16.686498 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-05 03:46:16.686509 | orchestrator | Sunday 05 April 2026 03:46:10 +0000 (0:00:02.903) 0:00:29.665 ********** 2026-04-05 03:46:16.686519 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:46:16.686530 | orchestrator | 2026-04-05 03:46:16.686540 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-05 03:46:16.686550 | orchestrator | Sunday 05 April 2026 03:46:10 +0000 (0:00:00.130) 0:00:29.796 ********** 2026-04-05 03:46:16.686559 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
03:46:16.686569 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:46:16.686580 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:46:16.686590 | orchestrator | 2026-04-05 03:46:16.686600 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 03:46:16.686609 | orchestrator | Sunday 05 April 2026 03:46:11 +0000 (0:00:00.564) 0:00:30.361 ********** 2026-04-05 03:46:16.686620 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:46:16.686630 | orchestrator | 2026-04-05 03:46:16.686639 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-05 03:46:16.686657 | orchestrator | Sunday 05 April 2026 03:46:11 +0000 (0:00:00.598) 0:00:30.959 ********** 2026-04-05 03:46:16.686674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:16.686693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:18.510267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:18.511323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:18.511521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:19.446399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:19.446517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:19.446532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:19.446568 | orchestrator | 2026-04-05 03:46:19.446581 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-05 03:46:19.446592 | orchestrator | Sunday 05 April 2026 03:46:18 +0000 (0:00:06.614) 0:00:37.574 ********** 2026-04-05 03:46:19.446620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:19.446632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:19.446661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:19.446672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:19.446682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:19.446693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:19.446710 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:46:19.446727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:19.446737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:19.446747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:19.446764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275921 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:46:20.275942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:20.275950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:20.275957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.275999 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:46:20.276004 | orchestrator |
2026-04-05 03:46:20.276010 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-04-05 03:46:20.276019 | orchestrator | Sunday 05 April 2026 03:46:19 +0000 (0:00:01.065) 0:00:38.639 **********
2026-04-05 03:46:20.276034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:20.276040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:20.276045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.276055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.653979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654316 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:46:20.654344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:20.654364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:20.654386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654510 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:46:20.654539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:20.654556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:20.654570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:20.654613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239216 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:46:25.239238 | orchestrator |
2026-04-05 03:46:25.239250 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-05 03:46:25.239262 | orchestrator | Sunday 05 April 2026 03:46:20 +0000 (0:00:01.077) 0:00:39.717 **********
2026-04-05 03:46:25.239289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:25.239302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:25.239312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:25.239361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:25.239374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:25.239390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-05 03:46:25.239400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:25.239459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 03:46:37.406599 | orchestrator |
2026-04-05 03:46:37.406613 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-05 03:46:37.406626 | orchestrator | Sunday 05 April 2026 03:46:27 +0000 (0:00:06.479) 0:00:46.196 **********
2026-04-05 03:46:37.406644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:37.406658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:37.406678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-05 03:46:37.406691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:37.406714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:46.344509 | orchestrator | 2026-04-05 03:46:46.344526 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-05 03:46:46.344544 | orchestrator | Sunday 05 April 2026 03:46:42 +0000 (0:00:15.135) 0:01:01.331 ********** 2026-04-05 03:46:46.344569 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 03:46:50.882773 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 03:46:50.882862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 03:46:50.882872 | orchestrator | 2026-04-05 03:46:50.882881 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-05 03:46:50.882890 | orchestrator | Sunday 05 April 2026 03:46:46 +0000 (0:00:04.069) 0:01:05.401 ********** 2026-04-05 03:46:50.882897 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 03:46:50.882905 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 03:46:50.882912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 03:46:50.882920 | orchestrator | 2026-04-05 03:46:50.882927 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-05 03:46:50.882948 | orchestrator | Sunday 05 April 2026 03:46:48 +0000 (0:00:02.615) 0:01:08.017 ********** 2026-04-05 03:46:50.882960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:50.882990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:50.882998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-05 03:46:50.883020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:50.883029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:50.883042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-05 03:46:50.883057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:50.883065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:50.883072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-05 03:46:50.883080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:50.883093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:53.941765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}})  2026-04-05 03:46:53.941942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:53.941976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:53.941995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:53.942095 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:53.942116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:53.942152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:53.942212 | orchestrator | 2026-04-05 03:46:53.942228 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-05 03:46:53.942242 | orchestrator | Sunday 05 April 2026 03:46:52 +0000 (0:00:03.066) 0:01:11.083 ********** 2026-04-05 03:46:53.942266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:53.942282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 
03:46:53.942295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:53.942309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:53.942331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:55.002646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:55.002732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:55.002763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:55.002774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:55.002785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:46:55.002803 | orchestrator | 2026-04-05 03:46:55.002815 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 03:46:55.002833 | orchestrator | Sunday 05 April 2026 03:46:54 +0000 (0:00:02.977) 0:01:14.061 ********** 2026-04-05 03:46:56.057728 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:46:56.057806 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:46:56.057816 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:46:56.057824 | orchestrator | 2026-04-05 03:46:56.057832 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-05 03:46:56.057840 | orchestrator | Sunday 05 April 2026 03:46:55 +0000 (0:00:00.350) 0:01:14.411 ********** 2026-04-05 03:46:56.057864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:56.057875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 03:46:56.057884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.057892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.057900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.057940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.057948 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:46:56.057960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:56.057972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 03:46:56.057982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.057997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.058074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:56.058096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:46:59.549484 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:46:59.549634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-05 03:46:59.549673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 03:46:59.549694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 03:46:59.549714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 03:46:59.549765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 03:46:59.549787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:46:59.549807 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:46:59.549827 | orchestrator | 2026-04-05 03:46:59.549866 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-05 03:46:59.549886 | orchestrator | Sunday 05 April 2026 03:46:56 +0000 (0:00:00.830) 0:01:15.242 ********** 2026-04-05 03:46:59.549915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:59.549936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:59.549956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-05 03:46:59.549989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:46:59.550014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.582985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:47:01.583003 | orchestrator | 2026-04-05 03:47:01.583021 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 03:47:01.583040 | orchestrator | Sunday 05 April 2026 03:47:00 +0000 (0:00:04.796) 0:01:20.038 ********** 2026-04-05 03:47:01.583057 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:47:01.583083 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:48:34.098559 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:48:34.098649 | orchestrator | 2026-04-05 03:48:34.098660 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-04-05 03:48:34.098682 | orchestrator | Sunday 05 April 2026 03:47:01 +0000 (0:00:00.609) 0:01:20.648 ********** 2026-04-05 03:48:34.098690 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-05 03:48:34.098696 | orchestrator | 2026-04-05 03:48:34.098703 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-05 03:48:34.098709 | orchestrator | Sunday 05 April 2026 03:47:03 +0000 (0:00:02.184) 0:01:22.832 ********** 2026-04-05 03:48:34.098716 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-05 03:48:34.098723 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-05 03:48:34.098730 | orchestrator | 2026-04-05 03:48:34.098736 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-05 03:48:34.098742 | orchestrator | Sunday 05 April 2026 03:47:06 +0000 (0:00:02.379) 0:01:25.211 ********** 2026-04-05 03:48:34.098749 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.098755 | orchestrator | 2026-04-05 03:48:34.098761 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 03:48:34.098768 | orchestrator | Sunday 05 April 2026 03:47:23 +0000 (0:00:16.896) 0:01:42.107 ********** 2026-04-05 03:48:34.098774 | orchestrator | 2026-04-05 03:48:34.098781 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 03:48:34.098787 | orchestrator | Sunday 05 April 2026 03:47:23 +0000 (0:00:00.078) 0:01:42.186 ********** 2026-04-05 03:48:34.098793 | orchestrator | 2026-04-05 03:48:34.098817 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-05 03:48:34.098824 | orchestrator | Sunday 05 April 2026 03:47:23 +0000 (0:00:00.075) 0:01:42.261 ********** 2026-04-05 03:48:34.098830 | orchestrator | 2026-04-05 
03:48:34.098837 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-05 03:48:34.098843 | orchestrator | Sunday 05 April 2026 03:47:23 +0000 (0:00:00.082) 0:01:42.344 ********** 2026-04-05 03:48:34.098850 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.098856 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.098863 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.098869 | orchestrator | 2026-04-05 03:48:34.098875 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-05 03:48:34.098881 | orchestrator | Sunday 05 April 2026 03:47:36 +0000 (0:00:13.029) 0:01:55.373 ********** 2026-04-05 03:48:34.098888 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.098894 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.098900 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.098906 | orchestrator | 2026-04-05 03:48:34.098912 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-05 03:48:34.098918 | orchestrator | Sunday 05 April 2026 03:47:46 +0000 (0:00:10.542) 0:02:05.915 ********** 2026-04-05 03:48:34.098925 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.098931 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.098937 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.098943 | orchestrator | 2026-04-05 03:48:34.098949 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-05 03:48:34.098955 | orchestrator | Sunday 05 April 2026 03:47:57 +0000 (0:00:10.634) 0:02:16.550 ********** 2026-04-05 03:48:34.098962 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.098968 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.098974 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.098980 | orchestrator | 2026-04-05 03:48:34.098986 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-05 03:48:34.098992 | orchestrator | Sunday 05 April 2026 03:48:08 +0000 (0:00:11.055) 0:02:27.605 ********** 2026-04-05 03:48:34.098999 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.099005 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.099011 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.099017 | orchestrator | 2026-04-05 03:48:34.099023 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-05 03:48:34.099030 | orchestrator | Sunday 05 April 2026 03:48:19 +0000 (0:00:11.367) 0:02:38.973 ********** 2026-04-05 03:48:34.099036 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.099042 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:48:34.099048 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:48:34.099054 | orchestrator | 2026-04-05 03:48:34.099061 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-05 03:48:34.099067 | orchestrator | Sunday 05 April 2026 03:48:25 +0000 (0:00:06.079) 0:02:45.052 ********** 2026-04-05 03:48:34.099073 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:48:34.099079 | orchestrator | 2026-04-05 03:48:34.099085 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:48:34.099093 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 03:48:34.099101 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 03:48:34.099178 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 03:48:34.099188 | orchestrator | 2026-04-05 03:48:34.099232 | orchestrator | 2026-04-05 03:48:34.099241 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 03:48:34.099255 | orchestrator | Sunday 05 April 2026 03:48:33 +0000 (0:00:07.674) 0:02:52.726 ********** 2026-04-05 03:48:34.099263 | orchestrator | =============================================================================== 2026-04-05 03:48:34.099270 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.90s 2026-04-05 03:48:34.099277 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.14s 2026-04-05 03:48:34.099299 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.03s 2026-04-05 03:48:34.099306 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.37s 2026-04-05 03:48:34.099317 | orchestrator | designate : Restart designate-producer container ----------------------- 11.06s 2026-04-05 03:48:34.099323 | orchestrator | designate : Restart designate-central container ------------------------ 10.63s 2026-04-05 03:48:34.099330 | orchestrator | designate : Restart designate-api container ---------------------------- 10.54s 2026-04-05 03:48:34.099336 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.67s 2026-04-05 03:48:34.099342 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.70s 2026-04-05 03:48:34.099348 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.61s 2026-04-05 03:48:34.099355 | orchestrator | designate : Copying over config.json files for services ----------------- 6.48s 2026-04-05 03:48:34.099361 | orchestrator | designate : Restart designate-worker container -------------------------- 6.08s 2026-04-05 03:48:34.099367 | orchestrator | designate : Check designate containers ---------------------------------- 4.80s 2026-04-05 03:48:34.099373 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.30s 2026-04-05 03:48:34.099379 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.07s 2026-04-05 03:48:34.099385 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.89s 2026-04-05 03:48:34.099392 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.53s 2026-04-05 03:48:34.099398 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.31s 2026-04-05 03:48:34.099404 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.28s 2026-04-05 03:48:34.099410 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.07s 2026-04-05 03:48:36.693167 | orchestrator | 2026-04-05 03:48:36 | INFO  | Task 0e0c8f4b-38b4-4bbf-b25b-8fe370586618 (octavia) was prepared for execution. 2026-04-05 03:48:36.693257 | orchestrator | 2026-04-05 03:48:36 | INFO  | It takes a moment until task 0e0c8f4b-38b4-4bbf-b25b-8fe370586618 (octavia) has been started and output is visible here. 
2026-04-05 03:50:50.825988 | orchestrator | 2026-04-05 03:50:50.826248 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:50:50.826269 | orchestrator | 2026-04-05 03:50:50.826280 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:50:50.826292 | orchestrator | Sunday 05 April 2026 03:48:41 +0000 (0:00:00.281) 0:00:00.281 ********** 2026-04-05 03:50:50.826302 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:50:50.826314 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:50:50.826324 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:50:50.826334 | orchestrator | 2026-04-05 03:50:50.826344 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:50:50.826355 | orchestrator | Sunday 05 April 2026 03:48:41 +0000 (0:00:00.406) 0:00:00.688 ********** 2026-04-05 03:50:50.826365 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-05 03:50:50.826375 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-05 03:50:50.826384 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-05 03:50:50.826394 | orchestrator | 2026-04-05 03:50:50.826405 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-05 03:50:50.826415 | orchestrator | 2026-04-05 03:50:50.826425 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 03:50:50.826458 | orchestrator | Sunday 05 April 2026 03:48:42 +0000 (0:00:00.481) 0:00:01.170 ********** 2026-04-05 03:50:50.826469 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:50:50.826479 | orchestrator | 2026-04-05 03:50:50.826489 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-04-05 03:50:50.826498 | orchestrator | Sunday 05 April 2026 03:48:42 +0000 (0:00:00.707) 0:00:01.877 ********** 2026-04-05 03:50:50.826509 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-04-05 03:50:50.826519 | orchestrator | 2026-04-05 03:50:50.826534 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-04-05 03:50:50.826555 | orchestrator | Sunday 05 April 2026 03:48:46 +0000 (0:00:03.531) 0:00:05.409 ********** 2026-04-05 03:50:50.826581 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-04-05 03:50:50.826598 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-04-05 03:50:50.826614 | orchestrator | 2026-04-05 03:50:50.826630 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-04-05 03:50:50.826646 | orchestrator | Sunday 05 April 2026 03:48:53 +0000 (0:00:07.072) 0:00:12.481 ********** 2026-04-05 03:50:50.826661 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:50:50.826676 | orchestrator | 2026-04-05 03:50:50.826691 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-04-05 03:50:50.826709 | orchestrator | Sunday 05 April 2026 03:48:57 +0000 (0:00:03.655) 0:00:16.137 ********** 2026-04-05 03:50:50.826725 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:50:50.826742 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-05 03:50:50.826759 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-04-05 03:50:50.826774 | orchestrator | 2026-04-05 03:50:50.826785 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-04-05 03:50:50.826797 | orchestrator | Sunday 05 April 2026 03:49:05 +0000 
(0:00:08.831) 0:00:24.968 ********** 2026-04-05 03:50:50.826808 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:50:50.826819 | orchestrator | 2026-04-05 03:50:50.826831 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-04-05 03:50:50.826855 | orchestrator | Sunday 05 April 2026 03:49:09 +0000 (0:00:03.361) 0:00:28.329 ********** 2026-04-05 03:50:50.826868 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-05 03:50:50.826879 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-04-05 03:50:50.826891 | orchestrator | 2026-04-05 03:50:50.826902 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-04-05 03:50:50.826912 | orchestrator | Sunday 05 April 2026 03:49:16 +0000 (0:00:07.564) 0:00:35.894 ********** 2026-04-05 03:50:50.826921 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-04-05 03:50:50.826931 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-04-05 03:50:50.826940 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-04-05 03:50:50.826950 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-04-05 03:50:50.826960 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-04-05 03:50:50.826969 | orchestrator | 2026-04-05 03:50:50.826979 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 03:50:50.826988 | orchestrator | Sunday 05 April 2026 03:49:33 +0000 (0:00:16.739) 0:00:52.634 ********** 2026-04-05 03:50:50.827003 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:50:50.827023 | orchestrator | 2026-04-05 03:50:50.827066 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-04-05 03:50:50.827096 | orchestrator | Sunday 05 April 2026 03:49:34 +0000 (0:00:00.872) 0:00:53.506 ********** 2026-04-05 03:50:50.827113 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827130 | orchestrator | 2026-04-05 03:50:50.827144 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-04-05 03:50:50.827158 | orchestrator | Sunday 05 April 2026 03:49:39 +0000 (0:00:05.166) 0:00:58.673 ********** 2026-04-05 03:50:50.827173 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827188 | orchestrator | 2026-04-05 03:50:50.827204 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-05 03:50:50.827242 | orchestrator | Sunday 05 April 2026 03:49:44 +0000 (0:00:05.251) 0:01:03.924 ********** 2026-04-05 03:50:50.827260 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:50:50.827271 | orchestrator | 2026-04-05 03:50:50.827281 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-04-05 03:50:50.827290 | orchestrator | Sunday 05 April 2026 03:49:48 +0000 (0:00:03.239) 0:01:07.164 ********** 2026-04-05 03:50:50.827300 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-05 03:50:50.827310 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-05 03:50:50.827320 | orchestrator | 2026-04-05 03:50:50.827330 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-04-05 03:50:50.827339 | orchestrator | Sunday 05 April 2026 03:49:59 +0000 (0:00:11.198) 0:01:18.362 ********** 2026-04-05 03:50:50.827349 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-04-05 03:50:50.827359 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-04-05 03:50:50.827370 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-04-05 03:50:50.827381 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-04-05 03:50:50.827391 | orchestrator | 2026-04-05 03:50:50.827401 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-04-05 03:50:50.827410 | orchestrator | Sunday 05 April 2026 03:50:15 +0000 (0:00:16.625) 0:01:34.987 ********** 2026-04-05 03:50:50.827425 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827434 | orchestrator | 2026-04-05 03:50:50.827444 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-04-05 03:50:50.827454 | orchestrator | Sunday 05 April 2026 03:50:20 +0000 (0:00:04.876) 0:01:39.864 ********** 2026-04-05 03:50:50.827464 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827473 | orchestrator | 2026-04-05 03:50:50.827483 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-04-05 03:50:50.827492 | orchestrator | Sunday 05 April 2026 03:50:26 +0000 (0:00:05.428) 0:01:45.293 ********** 2026-04-05 03:50:50.827502 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:50:50.827511 | orchestrator | 2026-04-05 03:50:50.827521 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-04-05 03:50:50.827531 | orchestrator | Sunday 05 April 2026 03:50:26 +0000 (0:00:00.248) 0:01:45.542 ********** 2026-04-05 03:50:50.827541 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:50:50.827550 | orchestrator | 2026-04-05 03:50:50.827560 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-04-05 03:50:50.827570 | orchestrator | Sunday 05 April 2026 03:50:31 +0000 (0:00:04.864) 0:01:50.406 ********** 2026-04-05 03:50:50.827580 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:50:50.827589 | orchestrator | 2026-04-05 03:50:50.827599 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-04-05 03:50:50.827609 | orchestrator | Sunday 05 April 2026 03:50:32 +0000 (0:00:01.199) 0:01:51.605 ********** 2026-04-05 03:50:50.827627 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827637 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.827646 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.827656 | orchestrator | 2026-04-05 03:50:50.827665 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-04-05 03:50:50.827682 | orchestrator | Sunday 05 April 2026 03:50:37 +0000 (0:00:05.407) 0:01:57.013 ********** 2026-04-05 03:50:50.827692 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.827702 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827712 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.827721 | orchestrator | 2026-04-05 03:50:50.827731 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-04-05 03:50:50.827741 | orchestrator | Sunday 05 April 2026 03:50:42 +0000 (0:00:04.622) 0:02:01.635 ********** 2026-04-05 03:50:50.827751 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827768 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.827793 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.827811 | orchestrator | 2026-04-05 03:50:50.827827 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-05 
03:50:50.827842 | orchestrator | Sunday 05 April 2026 03:50:43 +0000 (0:00:01.115) 0:02:02.751 ********** 2026-04-05 03:50:50.827858 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:50:50.827873 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:50:50.827889 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:50:50.827904 | orchestrator | 2026-04-05 03:50:50.827918 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-05 03:50:50.827933 | orchestrator | Sunday 05 April 2026 03:50:46 +0000 (0:00:02.358) 0:02:05.109 ********** 2026-04-05 03:50:50.827948 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.827963 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.827976 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.827991 | orchestrator | 2026-04-05 03:50:50.828006 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-05 03:50:50.828021 | orchestrator | Sunday 05 April 2026 03:50:47 +0000 (0:00:01.253) 0:02:06.362 ********** 2026-04-05 03:50:50.828059 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.828075 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.828090 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.828104 | orchestrator | 2026-04-05 03:50:50.828118 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-05 03:50:50.828132 | orchestrator | Sunday 05 April 2026 03:50:48 +0000 (0:00:01.224) 0:02:07.587 ********** 2026-04-05 03:50:50.828147 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:50:50.828161 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:50:50.828175 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:50:50.828189 | orchestrator | 2026-04-05 03:50:50.828216 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-05 03:51:19.018802 | orchestrator 
| Sunday 05 April 2026 03:50:50 +0000 (0:00:02.273) 0:02:09.860 ********** 2026-04-05 03:51:19.018950 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:51:19.018968 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:51:19.018981 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:51:19.018992 | orchestrator | 2026-04-05 03:51:19.019004 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-05 03:51:19.019063 | orchestrator | Sunday 05 April 2026 03:50:52 +0000 (0:00:01.578) 0:02:11.439 ********** 2026-04-05 03:51:19.019078 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019090 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:51:19.019101 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:51:19.019112 | orchestrator | 2026-04-05 03:51:19.019123 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-05 03:51:19.019134 | orchestrator | Sunday 05 April 2026 03:50:53 +0000 (0:00:00.729) 0:02:12.169 ********** 2026-04-05 03:51:19.019145 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:51:19.019156 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:51:19.019193 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019204 | orchestrator | 2026-04-05 03:51:19.019215 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 03:51:19.019226 | orchestrator | Sunday 05 April 2026 03:50:56 +0000 (0:00:03.082) 0:02:15.252 ********** 2026-04-05 03:51:19.019238 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:51:19.019249 | orchestrator | 2026-04-05 03:51:19.019259 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-05 03:51:19.019270 | orchestrator | Sunday 05 April 2026 03:50:56 +0000 (0:00:00.628) 0:02:15.880 ********** 2026-04-05 
03:51:19.019281 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019292 | orchestrator | 2026-04-05 03:51:19.019303 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-05 03:51:19.019316 | orchestrator | Sunday 05 April 2026 03:51:01 +0000 (0:00:04.330) 0:02:20.211 ********** 2026-04-05 03:51:19.019328 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019341 | orchestrator | 2026-04-05 03:51:19.019353 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-05 03:51:19.019366 | orchestrator | Sunday 05 April 2026 03:51:04 +0000 (0:00:03.664) 0:02:23.875 ********** 2026-04-05 03:51:19.019380 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-05 03:51:19.019394 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-05 03:51:19.019412 | orchestrator | 2026-04-05 03:51:19.019431 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-05 03:51:19.019447 | orchestrator | Sunday 05 April 2026 03:51:12 +0000 (0:00:07.369) 0:02:31.245 ********** 2026-04-05 03:51:19.019460 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019472 | orchestrator | 2026-04-05 03:51:19.019486 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-05 03:51:19.019499 | orchestrator | Sunday 05 April 2026 03:51:16 +0000 (0:00:04.192) 0:02:35.438 ********** 2026-04-05 03:51:19.019512 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:51:19.019526 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:51:19.019539 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:51:19.019550 | orchestrator | 2026-04-05 03:51:19.019561 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-05 03:51:19.019571 | orchestrator | Sunday 05 April 2026 03:51:16 +0000 (0:00:00.573) 0:02:36.012 ********** 
2026-04-05 03:51:19.019602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:19.019637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:19.019659 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:19.019671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:19.019684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:19.019700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:19.019712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:19.019726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:19.019753 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569299 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:20.569368 | orchestrator | 2026-04-05 03:51:20.569377 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-05 03:51:20.569386 | orchestrator | Sunday 05 April 2026 03:51:19 +0000 (0:00:02.489) 0:02:38.502 ********** 2026-04-05 03:51:20.569394 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:51:20.569403 | orchestrator | 2026-04-05 03:51:20.569410 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-05 03:51:20.569418 | orchestrator | Sunday 05 April 2026 03:51:19 +0000 (0:00:00.145) 0:02:38.647 ********** 2026-04-05 03:51:20.569425 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:51:20.569458 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:51:20.569476 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:51:20.569488 | orchestrator | 2026-04-05 03:51:20.569499 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-05 03:51:20.569511 | orchestrator | Sunday 05 April 2026 03:51:19 +0000 (0:00:00.338) 0:02:38.985 ********** 2026-04-05 03:51:20.569524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:20.569538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:20.569557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:20.569571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:20.569594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:20.569608 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:51:20.569632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:25.627779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:25.627874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:25.627898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:25.627907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:25.627935 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:51:25.627945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:25.627954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:25.627974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:25.627981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:25.627992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:25.628004 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:51:25.628030 | orchestrator | 2026-04-05 03:51:25.628038 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 03:51:25.628046 | orchestrator | Sunday 05 April 2026 03:51:20 +0000 (0:00:00.737) 0:02:39.723 ********** 2026-04-05 03:51:25.628054 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:51:25.628061 | orchestrator | 2026-04-05 03:51:25.628068 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-05 03:51:25.628075 | orchestrator | Sunday 05 April 2026 03:51:21 +0000 (0:00:00.785) 0:02:40.508 ********** 2026-04-05 03:51:25.628083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:25.628091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:25.628103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:27.236438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:27.236615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:27.236645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:27.236665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:27.236868 | orchestrator | 2026-04-05 03:51:27.236889 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-05 03:51:27.236909 | orchestrator | Sunday 05 April 2026 03:51:26 +0000 (0:00:05.151) 0:02:45.660 ********** 2026-04-05 03:51:27.236941 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:27.344192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:27.344316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:27.344335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:27.344347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:27.344361 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:51:27.344375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:27.344388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:27.344442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:27.344461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:27.344473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:27.344484 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:51:27.344496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:27.344507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:27.344518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:27.344548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-04-05 03:51:28.216854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:28.216961 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:51:28.216977 | orchestrator | 2026-04-05 03:51:28.216990 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-05 03:51:28.217003 | orchestrator | Sunday 05 April 2026 03:51:27 +0000 (0:00:00.730) 0:02:46.390 ********** 2026-04-05 03:51:28.217067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-04-05 03:51:28.217080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:28.217093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:28.217107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:28.217162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:28.217174 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:51:28.217193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:28.217205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:28.217217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:28.217228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:28.217247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:28.217259 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:51:28.217278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 03:51:32.888828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 03:51:32.888920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 03:51:32.888929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 03:51:32.888936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 03:51:32.888957 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:51:32.888964 | orchestrator | 2026-04-05 03:51:32.888971 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-05 
03:51:32.888977 | orchestrator | Sunday 05 April 2026 03:51:28 +0000 (0:00:01.410) 0:02:47.801 ********** 2026-04-05 03:51:32.888983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:32.889046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:32.889054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:32.889060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:32.889066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:32.889076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:51:32.889082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:32.889094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-04-05 03:51:50.615741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:51:50.615750 | orchestrator | 2026-04-05 03:51:50.615759 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-05 03:51:50.615767 | orchestrator | Sunday 05 April 2026 03:51:33 +0000 (0:00:05.077) 0:02:52.879 ********** 2026-04-05 03:51:50.615774 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 03:51:50.615782 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 03:51:50.615789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 03:51:50.615795 | orchestrator | 2026-04-05 03:51:50.615802 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-05 03:51:50.615809 | orchestrator | Sunday 05 April 2026 03:51:35 +0000 (0:00:01.754) 0:02:54.634 ********** 2026-04-05 03:51:50.615817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:50.615831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:50.615839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:51:50.615856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:52:07.358422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:52:07.358535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:52:07.358559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:52:07.358740 | orchestrator | 2026-04-05 03:52:07.358751 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-05 03:52:07.358761 | orchestrator | Sunday 05 April 2026 03:51:54 +0000 (0:00:18.698) 0:03:13.332 ********** 2026-04-05 03:52:07.358770 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:52:07.358780 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:52:07.358790 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:52:07.358799 | orchestrator | 2026-04-05 03:52:07.358807 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-05 03:52:07.358816 | orchestrator | Sunday 05 April 2026 03:51:56 +0000 (0:00:01.974) 0:03:15.307 ********** 2026-04-05 03:52:07.358825 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-05 03:52:07.358835 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 03:52:07.358843 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 03:52:07.358852 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 03:52:07.358861 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 03:52:07.358870 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-05 03:52:07.358878 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 03:52:07.358887 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 03:52:07.358896 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 03:52:07.358904 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 03:52:07.358913 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 03:52:07.358922 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 03:52:07.358930 | orchestrator | 2026-04-05 03:52:07.358939 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-05 03:52:07.358952 | orchestrator | Sunday 05 April 2026 03:52:01 +0000 (0:00:05.522) 0:03:20.829 ********** 2026-04-05 03:52:07.358962 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-05 03:52:07.358975 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 03:52:07.359019 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 03:52:16.786309 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786399 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786408 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786414 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786421 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786427 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786434 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786440 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786446 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786452 | orchestrator | 2026-04-05 03:52:16.786460 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-05 03:52:16.786468 | orchestrator | Sunday 05 April 2026 03:52:07 +0000 (0:00:05.569) 0:03:26.399 ********** 2026-04-05 03:52:16.786475 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-05 03:52:16.786481 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 03:52:16.786488 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 03:52:16.786494 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786501 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786507 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-05 03:52:16.786513 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786520 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786526 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 03:52:16.786532 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786539 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786546 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 03:52:16.786552 | orchestrator | 2026-04-05 03:52:16.786558 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-05 03:52:16.786565 | orchestrator | Sunday 05 April 2026 03:52:13 +0000 (0:00:05.810) 0:03:32.210 ********** 2026-04-05 03:52:16.786575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:52:16.786585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:52:16.786645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 03:52:16.786654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:52:16.786661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 03:52:16.786668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-05 03:52:16.786675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:16.786682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:16.786697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 03:52:16.786708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 03:53:47.294637 | orchestrator | 2026-04-05 
03:53:47.294648 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 03:53:47.294659 | orchestrator | Sunday 05 April 2026 03:52:17 +0000 (0:00:04.277) 0:03:36.488 ********** 2026-04-05 03:53:47.294668 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:53:47.294678 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:53:47.294686 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:53:47.294695 | orchestrator | 2026-04-05 03:53:47.294716 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-05 03:53:47.294725 | orchestrator | Sunday 05 April 2026 03:52:18 +0000 (0:00:00.750) 0:03:37.238 ********** 2026-04-05 03:53:47.294734 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.294743 | orchestrator | 2026-04-05 03:53:47.294752 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-05 03:53:47.294760 | orchestrator | Sunday 05 April 2026 03:52:20 +0000 (0:00:02.308) 0:03:39.547 ********** 2026-04-05 03:53:47.294769 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.294778 | orchestrator | 2026-04-05 03:53:47.294786 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-05 03:53:47.294795 | orchestrator | Sunday 05 April 2026 03:52:22 +0000 (0:00:02.243) 0:03:41.790 ********** 2026-04-05 03:53:47.294804 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.294812 | orchestrator | 2026-04-05 03:53:47.294821 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-05 03:53:47.294832 | orchestrator | Sunday 05 April 2026 03:52:25 +0000 (0:00:02.284) 0:03:44.074 ********** 2026-04-05 03:53:47.294855 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.294865 | orchestrator | 2026-04-05 03:53:47.294874 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-05 03:53:47.294883 | orchestrator | Sunday 05 April 2026 03:52:27 +0000 (0:00:02.355) 0:03:46.429 ********** 2026-04-05 03:53:47.294891 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.294970 | orchestrator | 2026-04-05 03:53:47.294981 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 03:53:47.294990 | orchestrator | Sunday 05 April 2026 03:52:50 +0000 (0:00:23.443) 0:04:09.873 ********** 2026-04-05 03:53:47.294999 | orchestrator | 2026-04-05 03:53:47.295009 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 03:53:47.295019 | orchestrator | Sunday 05 April 2026 03:52:50 +0000 (0:00:00.073) 0:04:09.947 ********** 2026-04-05 03:53:47.295029 | orchestrator | 2026-04-05 03:53:47.295040 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 03:53:47.295050 | orchestrator | Sunday 05 April 2026 03:52:50 +0000 (0:00:00.078) 0:04:10.025 ********** 2026-04-05 03:53:47.295060 | orchestrator | 2026-04-05 03:53:47.295071 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-05 03:53:47.295081 | orchestrator | Sunday 05 April 2026 03:52:51 +0000 (0:00:00.071) 0:04:10.097 ********** 2026-04-05 03:53:47.295091 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.295101 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:53:47.295112 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:53:47.295122 | orchestrator | 2026-04-05 03:53:47.295132 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-05 03:53:47.295142 | orchestrator | Sunday 05 April 2026 03:53:03 +0000 (0:00:12.400) 0:04:22.497 ********** 2026-04-05 03:53:47.295161 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:53:47.295172 | orchestrator | changed: 
[testbed-node-0] 2026-04-05 03:53:47.295182 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:53:47.295192 | orchestrator | 2026-04-05 03:53:47.295202 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-05 03:53:47.295212 | orchestrator | Sunday 05 April 2026 03:53:15 +0000 (0:00:11.594) 0:04:34.091 ********** 2026-04-05 03:53:47.295223 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.295233 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:53:47.295243 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:53:47.295254 | orchestrator | 2026-04-05 03:53:47.295264 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-05 03:53:47.295274 | orchestrator | Sunday 05 April 2026 03:53:25 +0000 (0:00:10.544) 0:04:44.635 ********** 2026-04-05 03:53:47.295283 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.295292 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:53:47.295300 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:53:47.295309 | orchestrator | 2026-04-05 03:53:47.295317 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-05 03:53:47.295326 | orchestrator | Sunday 05 April 2026 03:53:36 +0000 (0:00:10.533) 0:04:55.169 ********** 2026-04-05 03:53:47.295335 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:53:47.295343 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:53:47.295352 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:53:47.295361 | orchestrator | 2026-04-05 03:53:47.295369 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:53:47.295380 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 03:53:47.295390 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-05 03:53:47.295398 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 03:53:47.295407 | orchestrator | 2026-04-05 03:53:47.295416 | orchestrator | 2026-04-05 03:53:47.295424 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:53:47.295433 | orchestrator | Sunday 05 April 2026 03:53:47 +0000 (0:00:11.156) 0:05:06.326 ********** 2026-04-05 03:53:47.295442 | orchestrator | =============================================================================== 2026-04-05 03:53:47.295451 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.44s 2026-04-05 03:53:47.295459 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.70s 2026-04-05 03:53:47.295468 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.74s 2026-04-05 03:53:47.295476 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.63s 2026-04-05 03:53:47.295485 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.40s 2026-04-05 03:53:47.295493 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.59s 2026-04-05 03:53:47.295507 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.20s 2026-04-05 03:53:47.295516 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.16s 2026-04-05 03:53:47.295525 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.54s 2026-04-05 03:53:47.295533 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.53s 2026-04-05 03:53:47.295542 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.83s 2026-04-05 03:53:47.295551 
| orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.56s 2026-04-05 03:53:47.295559 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.37s 2026-04-05 03:53:47.295573 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.07s 2026-04-05 03:53:47.295589 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.81s 2026-04-05 03:53:47.791142 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.57s 2026-04-05 03:53:47.791216 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.52s 2026-04-05 03:53:47.791223 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.43s 2026-04-05 03:53:47.791227 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.41s 2026-04-05 03:53:47.791231 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.25s 2026-04-05 03:53:50.255186 | orchestrator | 2026-04-05 03:53:50 | INFO  | Task 1bd97177-235d-498e-a69a-bc6654db34e8 (ceilometer) was prepared for execution. 2026-04-05 03:53:50.255300 | orchestrator | 2026-04-05 03:53:50 | INFO  | It takes a moment until task 1bd97177-235d-498e-a69a-bc6654db34e8 (ceilometer) has been started and output is visible here. 
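The PLAY RECAP above reports per-host Ansible task counters (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`); a run is healthy when `unreachable` and `failed` are zero on every host, as they are here. When post-processing a console log like this one, those lines can be extracted mechanically. A minimal sketch in Python (`parse_recap_line` is a hypothetical helper for illustration, not part of Zuul, Ansible, or kolla-ansible):

```python
import re

# Matches one "PLAY RECAP" host line, e.g.
# "testbed-node-0 : ok=57 changed=38 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for one PLAY RECAP host line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    # Split "ok=57 changed=38 ..." into {"ok": 57, "changed": 38, ...}
    stats = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("stats").split())
    }
    return m.group("host"), stats

if __name__ == "__main__":
    host, stats = parse_recap_line(
        "testbed-node-0 : ok=57 changed=38 unreachable=0 "
        "failed=0 skipped=7 rescued=0 ignored=0"
    )
    # A run passed on this host iff nothing failed and nothing was unreachable.
    print(host, stats["failed"] == 0 and stats["unreachable"] == 0)
```

This kind of check is useful when scanning long periodic-job logs such as this one, where a non-zero `failed` count in any recap is the signal to dig into the preceding task output.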
2026-04-05 03:54:16.010445 | orchestrator | 2026-04-05 03:54:16.010532 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:54:16.010542 | orchestrator | 2026-04-05 03:54:16.010549 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:54:16.010557 | orchestrator | Sunday 05 April 2026 03:53:54 +0000 (0:00:00.287) 0:00:00.287 ********** 2026-04-05 03:54:16.010564 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:54:16.010571 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:54:16.010578 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:54:16.010585 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:54:16.010591 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:54:16.010597 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:54:16.010603 | orchestrator | 2026-04-05 03:54:16.010610 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:54:16.010617 | orchestrator | Sunday 05 April 2026 03:53:55 +0000 (0:00:00.825) 0:00:01.113 ********** 2026-04-05 03:54:16.010624 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010630 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010637 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010643 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010649 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010656 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-05 03:54:16.010662 | orchestrator | 2026-04-05 03:54:16.010668 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-05 03:54:16.010674 | orchestrator | 2026-04-05 03:54:16.010681 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-05 03:54:16.010688 | orchestrator | Sunday 05 April 2026 03:53:56 +0000 (0:00:00.692) 0:00:01.805 ********** 2026-04-05 03:54:16.010695 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:54:16.010702 | orchestrator | 2026-04-05 03:54:16.010709 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-05 03:54:16.010715 | orchestrator | Sunday 05 April 2026 03:53:57 +0000 (0:00:01.304) 0:00:03.110 ********** 2026-04-05 03:54:16.010722 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:16.010728 | orchestrator | 2026-04-05 03:54:16.010734 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-05 03:54:16.010741 | orchestrator | Sunday 05 April 2026 03:53:57 +0000 (0:00:00.141) 0:00:03.252 ********** 2026-04-05 03:54:16.010747 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:16.010753 | orchestrator | 2026-04-05 03:54:16.010760 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-05 03:54:16.010766 | orchestrator | Sunday 05 April 2026 03:53:57 +0000 (0:00:00.138) 0:00:03.390 ********** 2026-04-05 03:54:16.010793 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:54:16.010799 | orchestrator | 2026-04-05 03:54:16.010806 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-05 03:54:16.010812 | orchestrator | Sunday 05 April 2026 03:54:02 +0000 (0:00:04.414) 0:00:07.805 ********** 2026-04-05 03:54:16.010818 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 03:54:16.010825 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-05 03:54:16.010831 | orchestrator | 
2026-04-05 03:54:16.010837 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-05 03:54:16.010843 | orchestrator | Sunday 05 April 2026 03:54:06 +0000 (0:00:04.184) 0:00:11.990 ********** 2026-04-05 03:54:16.010850 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:54:16.010856 | orchestrator | 2026-04-05 03:54:16.010862 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-05 03:54:16.010939 | orchestrator | Sunday 05 April 2026 03:54:09 +0000 (0:00:03.402) 0:00:15.393 ********** 2026-04-05 03:54:16.010947 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-05 03:54:16.010953 | orchestrator | 2026-04-05 03:54:16.010959 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-05 03:54:16.010966 | orchestrator | Sunday 05 April 2026 03:54:14 +0000 (0:00:04.359) 0:00:19.753 ********** 2026-04-05 03:54:16.010972 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:16.010978 | orchestrator | 2026-04-05 03:54:16.010984 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-05 03:54:16.010990 | orchestrator | Sunday 05 April 2026 03:54:14 +0000 (0:00:00.132) 0:00:19.886 ********** 2026-04-05 03:54:16.010999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:16.011023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:16.011030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:16.011038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:16.011052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:16.011059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:16.011066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:16.011078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:21.061991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:21.062114 | orchestrator | 2026-04-05 03:54:21.062123 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-05 03:54:21.062150 | orchestrator | Sunday 05 April 2026 03:54:15 +0000 (0:00:01.579) 0:00:21.465 ********** 2026-04-05 03:54:21.062157 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-04-05 03:54:21.062163 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 03:54:21.062169 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:21.062174 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 03:54:21.062180 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 03:54:21.062185 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 03:54:21.062191 | orchestrator | 2026-04-05 03:54:21.062196 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-05 03:54:21.062203 | orchestrator | Sunday 05 April 2026 03:54:17 +0000 (0:00:01.810) 0:00:23.275 ********** 2026-04-05 03:54:21.062208 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:54:21.062214 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:54:21.062220 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:54:21.062225 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:54:21.062230 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:54:21.062236 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:54:21.062241 | orchestrator | 2026-04-05 03:54:21.062246 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-05 03:54:21.062252 | orchestrator | Sunday 05 April 2026 03:54:18 +0000 (0:00:00.611) 0:00:23.887 ********** 2026-04-05 03:54:21.062257 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:21.062263 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:21.062268 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:21.062274 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:21.062280 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:21.062285 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:21.062290 | orchestrator | 2026-04-05 03:54:21.062296 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-04-05 03:54:21.062302 | orchestrator | Sunday 05 April 2026 03:54:19 +0000 (0:00:00.833) 0:00:24.721 ********** 2026-04-05 03:54:21.062308 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:54:21.062313 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:54:21.062318 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:54:21.062323 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:54:21.062329 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:54:21.062360 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:54:21.062366 | orchestrator | 2026-04-05 03:54:21.062372 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-05 03:54:21.062377 | orchestrator | Sunday 05 April 2026 03:54:19 +0000 (0:00:00.639) 0:00:25.360 ********** 2026-04-05 03:54:21.062387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:21.062394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:21.062400 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:21.062424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:21.062431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:21.062437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:21.062442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:21.062448 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:21.062457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:21.062462 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:21.062468 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:21.062473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': 
{'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:21.062483 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:21.062493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149176 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:26.149286 | orchestrator | 2026-04-05 03:54:26.149314 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-05 03:54:26.149326 | orchestrator | Sunday 05 April 2026 03:54:21 +0000 (0:00:01.157) 0:00:26.518 ********** 2026-04-05 03:54:26.149347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 
'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:26.149373 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:26.149403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149422 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:26.149465 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:26.149483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 
03:54:26.149516 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:26.149554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149574 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:26.149590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149607 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:26.149633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:26.149652 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:26.149669 | orchestrator | 2026-04-05 03:54:26.149688 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-05 03:54:26.149720 | orchestrator | Sunday 05 April 2026 03:54:21 +0000 (0:00:00.872) 0:00:27.392 ********** 2026-04-05 03:54:26.149738 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:26.149755 | orchestrator | 2026-04-05 03:54:26.149773 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-05 03:54:26.149791 | orchestrator | Sunday 05 April 2026 03:54:22 +0000 (0:00:00.746) 0:00:28.139 ********** 2026-04-05 03:54:26.149810 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:54:26.149827 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:54:26.149844 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:54:26.149861 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:54:26.149878 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:54:26.149895 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:54:26.149937 | orchestrator | 2026-04-05 03:54:26.149954 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-05 03:54:26.149971 | orchestrator | Sunday 05 April 2026 03:54:23 +0000 (0:00:00.861) 
0:00:29.000 ********** 2026-04-05 03:54:26.149987 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:54:26.150005 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:54:26.150097 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:54:26.150116 | orchestrator | ok: [testbed-node-3] 2026-04-05 03:54:26.150130 | orchestrator | ok: [testbed-node-4] 2026-04-05 03:54:26.150144 | orchestrator | ok: [testbed-node-5] 2026-04-05 03:54:26.150159 | orchestrator | 2026-04-05 03:54:26.150174 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-05 03:54:26.150190 | orchestrator | Sunday 05 April 2026 03:54:24 +0000 (0:00:01.044) 0:00:30.045 ********** 2026-04-05 03:54:26.150205 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:26.150219 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:26.150236 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:26.150254 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:26.150270 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:26.150286 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:26.150302 | orchestrator | 2026-04-05 03:54:26.150319 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-05 03:54:26.150348 | orchestrator | Sunday 05 April 2026 03:54:25 +0000 (0:00:00.887) 0:00:30.932 ********** 2026-04-05 03:54:26.150359 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:26.150368 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:26.150379 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:26.150389 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:26.150398 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:26.150408 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:26.150418 | orchestrator | 2026-04-05 03:54:31.463263 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-05 03:54:31.463371 | orchestrator | Sunday 05 April 2026 03:54:26 +0000 (0:00:00.680) 0:00:31.612 ********** 2026-04-05 03:54:31.463387 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:31.463399 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 03:54:31.463410 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 03:54:31.463420 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 03:54:31.463430 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 03:54:31.463440 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 03:54:31.463449 | orchestrator | 2026-04-05 03:54:31.463460 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-05 03:54:31.463470 | orchestrator | Sunday 05 April 2026 03:54:27 +0000 (0:00:01.576) 0:00:33.189 ********** 2026-04-05 03:54:31.463484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:31.463533 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:31.463558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:31.463579 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:31.463589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:31.463628 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:31.463638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463656 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 03:54:31.463666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463676 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:31.463691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:31.463702 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:31.463712 | orchestrator | 2026-04-05 03:54:31.463722 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-05 03:54:31.463732 | orchestrator | Sunday 05 April 2026 03:54:28 +0000 (0:00:00.858) 0:00:34.047 ********** 2026-04-05 03:54:31.463742 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 03:54:31.463752 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:31.463789 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:31.463802 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:31.463814 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:31.463825 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:31.463837 | orchestrator | 2026-04-05 03:54:31.463848 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-05 03:54:31.463860 | orchestrator | Sunday 05 April 2026 03:54:29 +0000 (0:00:00.864) 0:00:34.912 ********** 2026-04-05 03:54:31.463871 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 03:54:31.463882 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 03:54:31.463893 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:31.463904 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 03:54:31.463947 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 03:54:31.463961 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 03:54:31.463972 | orchestrator | 2026-04-05 03:54:31.463989 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-05 03:54:31.464005 | orchestrator | Sunday 05 April 2026 03:54:30 +0000 (0:00:01.533) 0:00:36.445 ********** 2026-04-05 03:54:31.464038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:37.645647 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:37.645667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:37.645713 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:37.645726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:37.645754 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:37.645767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645803 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:37.645835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645850 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:37.645864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:37.645877 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:37.645890 | orchestrator | 2026-04-05 03:54:37.645903 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-05 03:54:37.645918 | orchestrator | Sunday 05 April 2026 03:54:32 +0000 (0:00:01.149) 0:00:37.595 ********** 2026-04-05 03:54:37.645955 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:37.645967 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:37.645979 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:37.645991 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:37.646004 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:37.646070 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:37.646085 | orchestrator | 2026-04-05 03:54:37.646098 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-05 03:54:37.646112 | orchestrator | Sunday 05 April 2026 03:54:32 +0000 (0:00:00.838) 0:00:38.434 ********** 2026-04-05 03:54:37.646125 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:37.646138 | orchestrator | 2026-04-05 03:54:37.646150 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-05 03:54:37.646164 | orchestrator | Sunday 05 April 2026 03:54:33 +0000 (0:00:00.161) 0:00:38.595 ********** 2026-04-05 03:54:37.646177 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:37.646191 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:37.646205 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:37.646217 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:37.646230 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:37.646243 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:37.646256 | 
orchestrator | 2026-04-05 03:54:37.646269 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-05 03:54:37.646282 | orchestrator | Sunday 05 April 2026 03:54:33 +0000 (0:00:00.643) 0:00:39.239 ********** 2026-04-05 03:54:37.646308 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 03:54:37.646323 | orchestrator | 2026-04-05 03:54:37.646336 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-05 03:54:37.646349 | orchestrator | Sunday 05 April 2026 03:54:35 +0000 (0:00:01.475) 0:00:40.714 ********** 2026-04-05 03:54:37.646363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:37.646387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:38.249791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:38.249901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:38.249979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:38.249993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:38.250079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:38.250093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:38.250125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:38.250136 | orchestrator | 2026-04-05 03:54:38.250148 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-05 03:54:38.250159 | orchestrator | Sunday 05 April 2026 03:54:37 +0000 (0:00:02.390) 0:00:43.105 ********** 2026-04-05 03:54:38.250170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:38.250187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:38.250207 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:38.250219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:38.250230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:38.250240 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:38.250250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:38.250268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:40.237245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237338 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:40.237349 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:40.237368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237389 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:40.237394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 
'timeout': '30'}}})  2026-04-05 03:54:40.237399 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:40.237405 | orchestrator | 2026-04-05 03:54:40.237410 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-05 03:54:40.237417 | orchestrator | Sunday 05 April 2026 03:54:38 +0000 (0:00:00.953) 0:00:44.058 ********** 2026-04-05 03:54:40.237423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:40.237449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:40.237471 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:40.237481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:40.237492 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:40.237497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237502 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:40.237507 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:40.237512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:40.237517 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:40.237529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:48.300377 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:48.300532 | orchestrator | 2026-04-05 03:54:48.300578 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-05 03:54:48.300592 | orchestrator | Sunday 05 April 2026 03:54:40 +0000 (0:00:01.634) 0:00:45.693 ********** 2026-04-05 03:54:48.300626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:48.300809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:48.300831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:48.300851 | orchestrator | 2026-04-05 03:54:48.300865 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-05 03:54:48.300878 | orchestrator | Sunday 05 April 2026 03:54:42 +0000 (0:00:02.634) 0:00:48.327 
********** 2026-04-05 03:54:48.300892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:48.300929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.499704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.499844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.499863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.499877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:58.499890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:58.499902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:58.499938 | orchestrator | 2026-04-05 03:54:58.499952 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-05 03:54:58.499965 | orchestrator | Sunday 05 April 2026 03:54:48 +0000 (0:00:05.435) 0:00:53.762 ********** 2026-04-05 03:54:58.500043 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:58.500059 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 03:54:58.500071 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 03:54:58.500081 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 03:54:58.500092 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 03:54:58.500103 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 03:54:58.500114 | orchestrator | 2026-04-05 03:54:58.500126 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-05 03:54:58.500137 | orchestrator | Sunday 05 April 2026 03:54:50 +0000 (0:00:01.840) 0:00:55.603 ********** 2026-04-05 03:54:58.500148 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:58.500159 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:58.500170 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:58.500181 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:58.500192 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:58.500212 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:58.500226 | orchestrator | 2026-04-05 03:54:58.500239 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-05 
03:54:58.500253 | orchestrator | Sunday 05 April 2026 03:54:50 +0000 (0:00:00.654) 0:00:56.257 ********** 2026-04-05 03:54:58.500266 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:58.500279 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:58.500292 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:58.500304 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:54:58.500317 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:54:58.500330 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:54:58.500342 | orchestrator | 2026-04-05 03:54:58.500355 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-05 03:54:58.500368 | orchestrator | Sunday 05 April 2026 03:54:52 +0000 (0:00:01.737) 0:00:57.994 ********** 2026-04-05 03:54:58.500380 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:58.500393 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:54:58.500406 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:54:58.500418 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:54:58.500431 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:54:58.500443 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:54:58.500455 | orchestrator | 2026-04-05 03:54:58.500467 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-05 03:54:58.500480 | orchestrator | Sunday 05 April 2026 03:54:53 +0000 (0:00:01.438) 0:00:59.433 ********** 2026-04-05 03:54:58.500492 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 03:54:58.500505 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 03:54:58.500517 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 03:54:58.500530 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 03:54:58.500543 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 03:54:58.500556 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-04-05 03:54:58.500568 | orchestrator | 2026-04-05 03:54:58.500579 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-05 03:54:58.500591 | orchestrator | Sunday 05 April 2026 03:54:55 +0000 (0:00:01.780) 0:01:01.214 ********** 2026-04-05 03:54:58.500612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.500625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.500637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:58.500656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:59.385665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:59.385784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:54:59.385831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:59.385847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:59.385859 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:54:59.385872 | orchestrator | 2026-04-05 03:54:59.385887 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-05 03:54:59.385901 | orchestrator | Sunday 05 April 2026 03:54:58 +0000 (0:00:02.742) 0:01:03.956 ********** 2026-04-05 03:54:59.385915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:59.385960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:59.385972 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:54:59.386122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:59.386152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:59.386166 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:54:59.386179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:54:59.386192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:54:59.386204 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:54:59.386218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:54:59.386231 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:54:59.386265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.051935 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:55:03.052733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.052775 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:55:03.052799 | orchestrator | 2026-04-05 03:55:03.052811 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-05 03:55:03.052820 | orchestrator | Sunday 05 April 2026 03:54:59 +0000 (0:00:00.892) 0:01:04.849 ********** 2026-04-05 03:55:03.052828 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:55:03.052835 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 03:55:03.052842 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:55:03.052848 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:55:03.052856 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:55:03.052863 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:55:03.052871 | orchestrator | 2026-04-05 03:55:03.052878 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-05 03:55:03.052886 | orchestrator | Sunday 05 April 2026 03:55:00 +0000 (0:00:00.875) 0:01:05.724 ********** 2026-04-05 03:55:03.052896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.052906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:55:03.052914 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.052922 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:55:03.052945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:55:03.052969 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:55:03.053029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.053035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 03:55:03.053040 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:55:03.053045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.053049 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:55:03.053055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.053062 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:55:03.053070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-05 03:55:03.053083 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:55:03.053090 | orchestrator | 2026-04-05 03:55:03.053103 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-05 03:55:03.053110 | orchestrator | Sunday 05 April 2026 03:55:01 +0000 (0:00:00.977) 0:01:06.702 ********** 2026-04-05 03:55:03.053127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-05 03:55:35.846413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:55:35.846434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:55:35.846441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 03:55:35.846447 | orchestrator | 
2026-04-05 03:55:35.846455 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-05 03:55:35.846462 | orchestrator | Sunday 05 April 2026 03:55:03 +0000 (0:00:01.810) 0:01:08.512 ********** 2026-04-05 03:55:35.846468 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:55:35.846475 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:55:35.846481 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:55:35.846486 | orchestrator | skipping: [testbed-node-3] 2026-04-05 03:55:35.846492 | orchestrator | skipping: [testbed-node-4] 2026-04-05 03:55:35.846498 | orchestrator | skipping: [testbed-node-5] 2026-04-05 03:55:35.846504 | orchestrator | 2026-04-05 03:55:35.846510 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-05 03:55:35.846516 | orchestrator | Sunday 05 April 2026 03:55:03 +0000 (0:00:00.648) 0:01:09.161 ********** 2026-04-05 03:55:35.846522 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:55:35.846527 | orchestrator | 2026-04-05 03:55:35.846533 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-05 03:55:35.846539 | orchestrator | Sunday 05 April 2026 03:55:08 +0000 (0:00:05.012) 0:01:14.173 ********** 2026-04-05 03:55:35.846545 | orchestrator | 2026-04-05 03:55:35.846551 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-05 03:55:35.846557 | orchestrator | Sunday 05 April 2026 03:55:08 +0000 (0:00:00.076) 0:01:14.250 ********** 2026-04-05 03:55:35.846562 | orchestrator | 2026-04-05 03:55:35.846568 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-05 03:55:35.846581 | orchestrator | Sunday 05 April 2026 03:55:08 +0000 (0:00:00.073) 0:01:14.324 ********** 2026-04-05 03:55:35.846587 | orchestrator | 2026-04-05 03:55:35.846606 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-04-05 03:55:35.846618 | orchestrator | Sunday 05 April 2026 03:55:09 +0000 (0:00:00.273) 0:01:14.597 ********** 2026-04-05 03:55:35.846625 | orchestrator | 2026-04-05 03:55:35.846630 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-05 03:55:35.846636 | orchestrator | Sunday 05 April 2026 03:55:09 +0000 (0:00:00.075) 0:01:14.672 ********** 2026-04-05 03:55:35.846642 | orchestrator | 2026-04-05 03:55:35.846648 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-05 03:55:35.846654 | orchestrator | Sunday 05 April 2026 03:55:09 +0000 (0:00:00.073) 0:01:14.745 ********** 2026-04-05 03:55:35.846659 | orchestrator | 2026-04-05 03:55:35.846665 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-05 03:55:35.846671 | orchestrator | Sunday 05 April 2026 03:55:09 +0000 (0:00:00.071) 0:01:14.817 ********** 2026-04-05 03:55:35.846677 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:55:35.846682 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:55:35.846688 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:55:35.846694 | orchestrator | 2026-04-05 03:55:35.846700 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-05 03:55:35.846705 | orchestrator | Sunday 05 April 2026 03:55:14 +0000 (0:00:05.339) 0:01:20.156 ********** 2026-04-05 03:55:35.846711 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:55:35.846717 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:55:35.846727 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:55:35.846733 | orchestrator | 2026-04-05 03:55:35.846738 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-05 03:55:35.846744 | orchestrator | Sunday 05 April 2026 03:55:24 +0000 (0:00:09.953) 
0:01:30.110 ********** 2026-04-05 03:55:35.846750 | orchestrator | changed: [testbed-node-4] 2026-04-05 03:55:35.846756 | orchestrator | changed: [testbed-node-3] 2026-04-05 03:55:35.846762 | orchestrator | changed: [testbed-node-5] 2026-04-05 03:55:35.846767 | orchestrator | 2026-04-05 03:55:35.846773 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:55:35.846791 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 03:55:35.846800 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 03:55:35.846811 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 03:55:36.363604 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-05 03:55:36.363684 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-05 03:55:36.363693 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-05 03:55:36.363701 | orchestrator | 2026-04-05 03:55:36.363718 | orchestrator | 2026-04-05 03:55:36.363724 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:55:36.363740 | orchestrator | Sunday 05 April 2026 03:55:35 +0000 (0:00:11.187) 0:01:41.297 ********** 2026-04-05 03:55:36.363746 | orchestrator | =============================================================================== 2026-04-05 03:55:36.363752 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.19s 2026-04-05 03:55:36.363759 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.95s 2026-04-05 03:55:36.363786 | orchestrator | ceilometer : Copying over ceilometer.conf 
------------------------------- 5.44s 2026-04-05 03:55:36.363792 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.34s 2026-04-05 03:55:36.363799 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 5.01s 2026-04-05 03:55:36.363805 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 4.42s 2026-04-05 03:55:36.363811 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.36s 2026-04-05 03:55:36.363817 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.18s 2026-04-05 03:55:36.363823 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.40s 2026-04-05 03:55:36.363829 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.74s 2026-04-05 03:55:36.363837 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.63s 2026-04-05 03:55:36.363841 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.39s 2026-04-05 03:55:36.363845 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.84s 2026-04-05 03:55:36.363848 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.81s 2026-04-05 03:55:36.363852 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.81s 2026-04-05 03:55:36.363856 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.78s 2026-04-05 03:55:36.363860 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.74s 2026-04-05 03:55:36.363864 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.63s 2026-04-05 03:55:36.363868 | orchestrator | ceilometer : Ensuring config directories exist 
-------------------------- 1.58s 2026-04-05 03:55:36.363872 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.58s 2026-04-05 03:55:38.817850 | orchestrator | 2026-04-05 03:55:38 | INFO  | Task bb2f98b7-777e-4e00-ab31-5917c6b9b9b3 (aodh) was prepared for execution. 2026-04-05 03:55:38.817950 | orchestrator | 2026-04-05 03:55:38 | INFO  | It takes a moment until task bb2f98b7-777e-4e00-ab31-5917c6b9b9b3 (aodh) has been started and output is visible here. 2026-04-05 03:56:13.168982 | orchestrator | 2026-04-05 03:56:13.169119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 03:56:13.169139 | orchestrator | 2026-04-05 03:56:13.169209 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 03:56:13.169223 | orchestrator | Sunday 05 April 2026 03:55:43 +0000 (0:00:00.303) 0:00:00.303 ********** 2026-04-05 03:56:13.169235 | orchestrator | ok: [testbed-node-0] 2026-04-05 03:56:13.169247 | orchestrator | ok: [testbed-node-1] 2026-04-05 03:56:13.169258 | orchestrator | ok: [testbed-node-2] 2026-04-05 03:56:13.169269 | orchestrator | 2026-04-05 03:56:13.169280 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 03:56:13.169292 | orchestrator | Sunday 05 April 2026 03:55:43 +0000 (0:00:00.381) 0:00:00.685 ********** 2026-04-05 03:56:13.169303 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-05 03:56:13.169330 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-05 03:56:13.169341 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-05 03:56:13.169362 | orchestrator | 2026-04-05 03:56:13.169374 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-05 03:56:13.169385 | orchestrator | 2026-04-05 03:56:13.169395 | orchestrator | TASK [aodh : include_tasks] 
**************************************************** 2026-04-05 03:56:13.169406 | orchestrator | Sunday 05 April 2026 03:55:44 +0000 (0:00:00.588) 0:00:01.274 ********** 2026-04-05 03:56:13.169423 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:56:13.169443 | orchestrator | 2026-04-05 03:56:13.169468 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-05 03:56:13.169526 | orchestrator | Sunday 05 April 2026 03:55:45 +0000 (0:00:00.635) 0:00:01.910 ********** 2026-04-05 03:56:13.169545 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-05 03:56:13.169563 | orchestrator | 2026-04-05 03:56:13.169579 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-05 03:56:13.169597 | orchestrator | Sunday 05 April 2026 03:55:48 +0000 (0:00:03.758) 0:00:05.669 ********** 2026-04-05 03:56:13.169615 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-05 03:56:13.169632 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-05 03:56:13.169652 | orchestrator | 2026-04-05 03:56:13.169669 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-05 03:56:13.169689 | orchestrator | Sunday 05 April 2026 03:55:55 +0000 (0:00:06.662) 0:00:12.331 ********** 2026-04-05 03:56:13.169709 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 03:56:13.169730 | orchestrator | 2026-04-05 03:56:13.169749 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-05 03:56:13.169767 | orchestrator | Sunday 05 April 2026 03:55:59 +0000 (0:00:03.832) 0:00:16.163 ********** 2026-04-05 03:56:13.169781 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-04-05 03:56:13.169794 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-05 03:56:13.169807 | orchestrator | 2026-04-05 03:56:13.169820 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-05 03:56:13.169832 | orchestrator | Sunday 05 April 2026 03:56:03 +0000 (0:00:04.206) 0:00:20.369 ********** 2026-04-05 03:56:13.169845 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-05 03:56:13.169859 | orchestrator | 2026-04-05 03:56:13.169872 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-05 03:56:13.169884 | orchestrator | Sunday 05 April 2026 03:56:07 +0000 (0:00:03.370) 0:00:23.740 ********** 2026-04-05 03:56:13.169895 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-05 03:56:13.169906 | orchestrator | 2026-04-05 03:56:13.169916 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-05 03:56:13.169927 | orchestrator | Sunday 05 April 2026 03:56:11 +0000 (0:00:04.060) 0:00:27.800 ********** 2026-04-05 03:56:13.169942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:13.169979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:13.170011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:13.170170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:13.170184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:13.170196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:13.170207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:13.170229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:14.529536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:14.529638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:14.529653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:14.529663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:14.529672 | orchestrator | 2026-04-05 03:56:14.529682 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-05 03:56:14.529692 | orchestrator | Sunday 05 April 2026 03:56:13 +0000 (0:00:02.052) 0:00:29.853 ********** 2026-04-05 03:56:14.529701 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:56:14.529712 | orchestrator | 2026-04-05 
03:56:14.529721 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-05 03:56:14.529730 | orchestrator | Sunday 05 April 2026 03:56:13 +0000 (0:00:00.131) 0:00:29.984 ********** 2026-04-05 03:56:14.529738 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:56:14.529747 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:56:14.529756 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:56:14.529764 | orchestrator | 2026-04-05 03:56:14.529773 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-05 03:56:14.529781 | orchestrator | Sunday 05 April 2026 03:56:13 +0000 (0:00:00.544) 0:00:30.529 ********** 2026-04-05 03:56:14.529791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:14.529842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:14.529859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:14.529869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:14.529878 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:56:14.529887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:14.529896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:14.529905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:14.529930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:19.990506 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:56:19.990667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:19.990687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-05 03:56:19.990701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:19.990713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:19.990725 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:56:19.990737 | orchestrator | 2026-04-05 03:56:19.990750 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-05 03:56:19.990763 | orchestrator | Sunday 05 April 2026 03:56:14 +0000 (0:00:00.687) 0:00:31.216 ********** 2026-04-05 03:56:19.990805 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 03:56:19.990817 | orchestrator | 2026-04-05 03:56:19.990828 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-05 03:56:19.990839 | orchestrator | Sunday 
05 April 2026 03:56:15 +0000 (0:00:00.862) 0:00:32.079 ********** 2026-04-05 03:56:19.990851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:19.990892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:19.990905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:19.990917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:19.990928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-04-05 03:56:19.990949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:19.990961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:19.990986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:20.732065 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:20.732232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:20.732254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:20.732269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:20.732312 | orchestrator | 2026-04-05 03:56:20.732326 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-05 03:56:20.732335 | orchestrator | Sunday 05 April 2026 03:56:19 +0000 (0:00:04.595) 0:00:36.674 ********** 2026-04-05 03:56:20.732347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:20.732370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:20.732398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:20.732407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:20.732415 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:56:20.732425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:20.732439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:20.732448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:20.732456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:20.732464 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:56:20.732483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:21.820729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-04-05 03:56:21.820820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.820852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.820862 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:56:21.820872 | orchestrator | 2026-04-05 03:56:21.820882 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-05 03:56:21.820892 | orchestrator | Sunday 05 April 2026 03:56:20 +0000 (0:00:00.746) 0:00:37.421 ********** 2026-04-05 03:56:21.820901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:21.820922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:21.820931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.820956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.820965 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:56:21.820981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:21.820990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:21.820998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.821007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:21.821016 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:56:21.821034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-05 03:56:25.940694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 03:56:25.940830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 03:56:25.940857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 03:56:25.940892 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:56:25.940914 | orchestrator | 2026-04-05 03:56:25.940934 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-05 03:56:25.940955 | orchestrator | Sunday 05 April 2026 03:56:21 +0000 (0:00:01.082) 0:00:38.503 ********** 2026-04-05 03:56:25.940973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:25.941012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:25.941059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:25.941094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:25.941288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881342 | orchestrator | 2026-04-05 03:56:34.881361 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-05 03:56:34.881378 | orchestrator | Sunday 05 April 2026 03:56:25 +0000 (0:00:04.120) 0:00:42.623 ********** 2026-04-05 03:56:34.881395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:34.881431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:34.881449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:34.881517 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:34.881661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255480 | orchestrator | 2026-04-05 03:56:40.255489 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-05 03:56:40.255496 | orchestrator | Sunday 05 April 2026 03:56:34 +0000 (0:00:08.943) 0:00:51.567 ********** 2026-04-05 03:56:40.255501 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:56:40.255507 | orchestrator | 
changed: [testbed-node-1] 2026-04-05 03:56:40.255511 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:56:40.255516 | orchestrator | 2026-04-05 03:56:40.255521 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-05 03:56:40.255526 | orchestrator | Sunday 05 April 2026 03:56:36 +0000 (0:00:01.941) 0:00:53.509 ********** 2026-04-05 03:56:40.255532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:40.255550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:40.255570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-05 03:56:40.255586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:56:40.255632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:57:37.324881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 03:57:37.324963 | orchestrator | 2026-04-05 03:57:37.324971 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-05 03:57:37.324976 | orchestrator | Sunday 05 April 2026 03:56:40 +0000 (0:00:03.434) 0:00:56.944 ********** 2026-04-05 03:57:37.324981 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:57:37.324985 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:57:37.324989 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:57:37.324993 | orchestrator | 2026-04-05 03:57:37.324997 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-05 03:57:37.325001 | orchestrator | Sunday 05 April 2026 03:56:40 +0000 (0:00:00.368) 0:00:57.313 ********** 2026-04-05 03:57:37.325005 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325009 | orchestrator | 2026-04-05 03:57:37.325013 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-05 03:57:37.325017 | orchestrator | Sunday 05 April 2026 03:56:42 +0000 (0:00:02.236) 0:00:59.549 ********** 2026-04-05 03:57:37.325021 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325027 | orchestrator | 2026-04-05 
03:57:37.325055 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-05 03:57:37.325062 | orchestrator | Sunday 05 April 2026 03:56:45 +0000 (0:00:02.393) 0:01:01.943 ********** 2026-04-05 03:57:37.325068 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325074 | orchestrator | 2026-04-05 03:57:37.325081 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 03:57:37.325087 | orchestrator | Sunday 05 April 2026 03:56:58 +0000 (0:00:13.539) 0:01:15.483 ********** 2026-04-05 03:57:37.325094 | orchestrator | 2026-04-05 03:57:37.325101 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 03:57:37.325107 | orchestrator | Sunday 05 April 2026 03:56:58 +0000 (0:00:00.071) 0:01:15.555 ********** 2026-04-05 03:57:37.325112 | orchestrator | 2026-04-05 03:57:37.325118 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 03:57:37.325125 | orchestrator | Sunday 05 April 2026 03:56:58 +0000 (0:00:00.070) 0:01:15.625 ********** 2026-04-05 03:57:37.325130 | orchestrator | 2026-04-05 03:57:37.325134 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-05 03:57:37.325138 | orchestrator | Sunday 05 April 2026 03:56:59 +0000 (0:00:00.304) 0:01:15.930 ********** 2026-04-05 03:57:37.325142 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325158 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:57:37.325163 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:57:37.325169 | orchestrator | 2026-04-05 03:57:37.325174 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-05 03:57:37.325180 | orchestrator | Sunday 05 April 2026 03:57:10 +0000 (0:00:10.839) 0:01:26.770 ********** 2026-04-05 03:57:37.325186 | orchestrator | changed: 
[testbed-node-2] 2026-04-05 03:57:37.325192 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325200 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:57:37.325204 | orchestrator | 2026-04-05 03:57:37.325208 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-05 03:57:37.325211 | orchestrator | Sunday 05 April 2026 03:57:20 +0000 (0:00:10.566) 0:01:37.336 ********** 2026-04-05 03:57:37.325215 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325219 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:57:37.325222 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:57:37.325226 | orchestrator | 2026-04-05 03:57:37.325230 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-05 03:57:37.325234 | orchestrator | Sunday 05 April 2026 03:57:31 +0000 (0:00:10.401) 0:01:47.738 ********** 2026-04-05 03:57:37.325237 | orchestrator | changed: [testbed-node-0] 2026-04-05 03:57:37.325241 | orchestrator | changed: [testbed-node-1] 2026-04-05 03:57:37.325245 | orchestrator | changed: [testbed-node-2] 2026-04-05 03:57:37.325248 | orchestrator | 2026-04-05 03:57:37.325252 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 03:57:37.325257 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 03:57:37.325262 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 03:57:37.325266 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 03:57:37.325270 | orchestrator | 2026-04-05 03:57:37.325273 | orchestrator | 2026-04-05 03:57:37.325277 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 03:57:37.325281 | orchestrator | Sunday 05 April 2026 
03:57:36 +0000 (0:00:05.843) 0:01:53.581 **********
2026-04-05 03:57:37.325285 | orchestrator | ===============================================================================
2026-04-05 03:57:37.325288 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.54s
2026-04-05 03:57:37.325292 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.84s
2026-04-05 03:57:37.325314 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.57s
2026-04-05 03:57:37.325344 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.40s
2026-04-05 03:57:37.325348 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.94s
2026-04-05 03:57:37.325352 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.66s
2026-04-05 03:57:37.325356 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.84s
2026-04-05 03:57:37.325360 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.60s
2026-04-05 03:57:37.325363 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.21s
2026-04-05 03:57:37.325367 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.12s
2026-04-05 03:57:37.325371 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 4.06s
2026-04-05 03:57:37.325375 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.83s
2026-04-05 03:57:37.325378 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.76s
2026-04-05 03:57:37.325382 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.43s
2026-04-05 03:57:37.325386 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.37s
2026-04-05 03:57:37.325389 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.39s
2026-04-05 03:57:37.325393 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.24s
2026-04-05 03:57:37.325397 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.05s
2026-04-05 03:57:37.325401 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.94s
2026-04-05 03:57:37.325404 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.08s
2026-04-05 03:57:39.854766 | orchestrator | 2026-04-05 03:57:39 | INFO  | Task 40dc9b48-4e7c-4002-b22a-17dac0648c77 (kolla-ceph-rgw) was prepared for execution.
2026-04-05 03:57:39.854849 | orchestrator | 2026-04-05 03:57:39 | INFO  | It takes a moment until task 40dc9b48-4e7c-4002-b22a-17dac0648c77 (kolla-ceph-rgw) has been started and output is visible here.
2026-04-05 03:58:18.873024 | orchestrator |
2026-04-05 03:58:18.873147 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:58:18.873171 | orchestrator |
2026-04-05 03:58:18.873186 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:58:18.873200 | orchestrator | Sunday 05 April 2026 03:57:44 +0000 (0:00:00.321) 0:00:00.321 **********
2026-04-05 03:58:18.873247 | orchestrator | ok: [testbed-manager]
2026-04-05 03:58:18.873265 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:58:18.873280 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:58:18.873294 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:58:18.873308 | orchestrator | ok: [testbed-node-3]
2026-04-05 03:58:18.873321 | orchestrator | ok: [testbed-node-4]
2026-04-05 03:58:18.873336 | orchestrator | ok: [testbed-node-5]
2026-04-05 03:58:18.873351 | orchestrator |
2026-04-05 03:58:18.873445 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:58:18.873465 | orchestrator | Sunday 05 April 2026 03:57:45 +0000 (0:00:00.970) 0:00:01.291 **********
2026-04-05 03:58:18.873480 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873496 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873510 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873524 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873537 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873553 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873569 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-05 03:58:18.873614 | orchestrator |
2026-04-05 03:58:18.873631 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-05 03:58:18.873646 | orchestrator |
2026-04-05 03:58:18.873661 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-05 03:58:18.873676 | orchestrator | Sunday 05 April 2026 03:57:46 +0000 (0:00:00.878) 0:00:02.170 **********
2026-04-05 03:58:18.873693 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 03:58:18.873711 | orchestrator |
2026-04-05 03:58:18.873727 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-05 03:58:18.873742 | orchestrator | Sunday 05 April 2026 03:57:48 +0000 (0:00:01.695) 0:00:03.866 **********
2026-04-05 03:58:18.873758 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-04-05 03:58:18.873768 | orchestrator |
2026-04-05 03:58:18.873777 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-05 03:58:18.873786 | orchestrator | Sunday 05 April 2026 03:57:52 +0000 (0:00:03.987) 0:00:07.854 **********
2026-04-05 03:58:18.873795 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-04-05 03:58:18.873806 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-04-05 03:58:18.873815 | orchestrator |
2026-04-05 03:58:18.873823 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-04-05 03:58:18.873832 | orchestrator | Sunday 05 April 2026 03:57:58 +0000 (0:00:03.317) 0:00:14.700 **********
2026-04-05 03:58:18.873840 | orchestrator | ok: [testbed-manager] => (item=service)
2026-04-05 03:58:18.873849 | orchestrator |
2026-04-05 03:58:18.873857 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-04-05 03:58:18.873866 | orchestrator | Sunday 05 April 2026 03:58:02 +0000 (0:00:03.317) 0:00:18.018 **********
2026-04-05 03:58:18.873874 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 03:58:18.873883 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-04-05 03:58:18.873891 | orchestrator |
2026-04-05 03:58:18.873900 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-04-05 03:58:18.873908 | orchestrator | Sunday 05 April 2026 03:58:06 +0000 (0:00:04.028) 0:00:22.046 **********
2026-04-05 03:58:18.873917 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-04-05 03:58:18.873926 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-04-05 03:58:18.873934 | orchestrator |
2026-04-05 03:58:18.873942 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-04-05 03:58:18.873951 | orchestrator | Sunday 05 April 2026 03:58:12 +0000 (0:00:06.515) 0:00:28.562 **********
2026-04-05 03:58:18.873960 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-04-05 03:58:18.873968 | orchestrator |
2026-04-05 03:58:18.873977 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:58:18.873986 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.873995 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874004 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874070 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874121 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874130 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874139 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:18.874148 | orchestrator |
2026-04-05 03:58:18.874156 | orchestrator |
2026-04-05 03:58:18.874165 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:58:18.874174 | orchestrator | Sunday 05 April 2026 03:58:18 +0000 (0:00:05.508) 0:00:34.070 **********
2026-04-05 03:58:18.874189 | orchestrator | ===============================================================================
2026-04-05 03:58:18.874198 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.85s
2026-04-05 03:58:18.874207 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.52s
2026-04-05 03:58:18.874216 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.51s
2026-04-05 03:58:18.874224 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.03s
2026-04-05 03:58:18.874233 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.99s
2026-04-05 03:58:18.874241 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.32s
2026-04-05 03:58:18.874250 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.70s
2026-04-05 03:58:18.874259 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.97s
2026-04-05 03:58:18.874259 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-04-05 03:58:21.456453 | orchestrator | 2026-04-05 03:58:21 | INFO  | Task 46066565-90a8-4742-81b2-b414f4f541ee (gnocchi) was prepared for execution.
2026-04-05 03:58:21.456572 | orchestrator | 2026-04-05 03:58:21 | INFO  | It takes a moment until task 46066565-90a8-4742-81b2-b414f4f541ee (gnocchi) has been started and output is visible here.
2026-04-05 03:58:27.377486 | orchestrator |
2026-04-05 03:58:27.377597 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:58:27.377610 | orchestrator |
2026-04-05 03:58:27.377618 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:58:27.377627 | orchestrator | Sunday 05 April 2026 03:58:26 +0000 (0:00:00.333) 0:00:00.333 **********
2026-04-05 03:58:27.377634 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:58:27.377642 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:58:27.377649 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:58:27.377656 | orchestrator |
2026-04-05 03:58:27.377665 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:58:27.377676 | orchestrator | Sunday 05 April 2026 03:58:26 +0000 (0:00:00.348) 0:00:00.682 **********
2026-04-05 03:58:27.377687 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-04-05 03:58:27.377699 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-04-05 03:58:27.377711 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-04-05 03:58:27.377724 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-04-05 03:58:27.377734 | orchestrator |
2026-04-05 03:58:27.377744 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-04-05 03:58:27.377754 | orchestrator | skipping: no hosts matched
2026-04-05 03:58:27.377766 | orchestrator |
2026-04-05 03:58:27.377778 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 03:58:27.377791 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:27.377805 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:27.377819 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 03:58:27.377861 | orchestrator |
2026-04-05 03:58:27.377873 | orchestrator |
2026-04-05 03:58:27.377885 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 03:58:27.377897 | orchestrator | Sunday 05 April 2026 03:58:26 +0000 (0:00:00.422) 0:00:01.104 **********
2026-04-05 03:58:27.377908 | orchestrator | ===============================================================================
2026-04-05 03:58:27.377918 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-04-05 03:58:27.377925 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-04-05 03:58:29.971991 | orchestrator | 2026-04-05 03:58:29 | INFO  | Task e4c6a1c9-8b12-4d8b-8097-b90f3e4b0eb9 (manila) was prepared for execution.
2026-04-05 03:58:29.972388 | orchestrator | 2026-04-05 03:58:29 | INFO  | It takes a moment until task e4c6a1c9-8b12-4d8b-8097-b90f3e4b0eb9 (manila) has been started and output is visible here.
2026-04-05 03:59:13.939671 | orchestrator |
2026-04-05 03:59:13.939789 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 03:59:13.939808 | orchestrator |
2026-04-05 03:59:13.939821 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 03:59:13.939834 | orchestrator | Sunday 05 April 2026 03:58:34 +0000 (0:00:00.304) 0:00:00.304 **********
2026-04-05 03:59:13.939846 | orchestrator | ok: [testbed-node-0]
2026-04-05 03:59:13.939858 | orchestrator | ok: [testbed-node-1]
2026-04-05 03:59:13.939869 | orchestrator | ok: [testbed-node-2]
2026-04-05 03:59:13.939880 | orchestrator |
2026-04-05 03:59:13.939891 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 03:59:13.939902 | orchestrator | Sunday 05 April 2026 03:58:35 +0000 (0:00:00.354) 0:00:00.658 **********
2026-04-05 03:59:13.939913 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-04-05 03:59:13.939925 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-04-05 03:59:13.939936 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-04-05 03:59:13.939947 | orchestrator |
2026-04-05 03:59:13.939957 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-04-05 03:59:13.939968 | orchestrator |
2026-04-05 03:59:13.939979 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 03:59:13.939990 | orchestrator | Sunday 05 April 2026 03:58:35 +0000 (0:00:00.481) 0:00:01.139 **********
2026-04-05 03:59:13.940017 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:59:13.940029 | orchestrator |
2026-04-05 03:59:13.940040 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 03:59:13.940051 | orchestrator | Sunday 05 April 2026 03:58:36 +0000 (0:00:00.675) 0:00:01.815 **********
2026-04-05 03:59:13.940063 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:59:13.940074 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:59:13.940086 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:59:13.940097 | orchestrator |
2026-04-05 03:59:13.940107 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-04-05 03:59:13.940118 | orchestrator | Sunday 05 April 2026 03:58:36 +0000 (0:00:00.506) 0:00:02.322 **********
2026-04-05 03:59:13.940129 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-04-05 03:59:13.940141 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-04-05 03:59:13.940152 | orchestrator |
2026-04-05 03:59:13.940163 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-04-05 03:59:13.940174 | orchestrator | Sunday 05 April 2026 03:58:43 +0000 (0:00:06.790) 0:00:09.113 **********
2026-04-05 03:59:13.940185 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-04-05 03:59:13.940200 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-04-05 03:59:13.940237 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-04-05 03:59:13.940250 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-04-05 03:59:13.940263 | orchestrator |
2026-04-05 03:59:13.940276 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-04-05 03:59:13.940287 | orchestrator | Sunday 05 April 2026 03:58:56 +0000 (0:00:13.480) 0:00:22.593 **********
2026-04-05 03:59:13.940297 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 03:59:13.940308 | orchestrator |
2026-04-05 03:59:13.940319 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-04-05 03:59:13.940330 | orchestrator | Sunday 05 April 2026 03:59:00 +0000 (0:00:03.429) 0:00:26.023 **********
2026-04-05 03:59:13.940341 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 03:59:13.940352 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-04-05 03:59:13.940363 | orchestrator |
2026-04-05 03:59:13.940374 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-04-05 03:59:13.940385 | orchestrator | Sunday 05 April 2026 03:59:04 +0000 (0:00:04.022) 0:00:30.045 **********
2026-04-05 03:59:13.940395 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 03:59:13.940406 | orchestrator |
2026-04-05 03:59:13.940418 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-04-05 03:59:13.940428 | orchestrator | Sunday 05 April 2026 03:59:07 +0000 (0:00:03.323) 0:00:33.369 **********
2026-04-05 03:59:13.940439 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-04-05 03:59:13.940450 | orchestrator |
2026-04-05 03:59:13.940493 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-04-05 03:59:13.940507 | orchestrator | Sunday 05 April 2026 03:59:11 +0000 (0:00:03.941) 0:00:37.311 **********
2026-04-05 03:59:13.940540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:13.940556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:13.940574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:13.940597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:59:13.940610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:59:13.940622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:59:13.940642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 03:59:25.389830 | orchestrator |
2026-04-05 03:59:25.389843 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 03:59:25.389854 | orchestrator | Sunday 05 April 2026 03:59:14 +0000 (0:00:02.343) 0:00:39.654 **********
2026-04-05 03:59:25.389865 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:59:25.389875 | orchestrator |
2026-04-05 03:59:25.389885 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-04-05 03:59:25.389894 | orchestrator | Sunday 05 April 2026 03:59:14 +0000 (0:00:00.629) 0:00:40.284 **********
2026-04-05 03:59:25.389904 | orchestrator | changed: [testbed-node-0]
2026-04-05 03:59:25.389915 | orchestrator | changed: [testbed-node-1]
2026-04-05 03:59:25.389924 | orchestrator | changed: [testbed-node-2]
2026-04-05 03:59:25.389934 | orchestrator |
2026-04-05 03:59:25.389944 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-04-05 03:59:25.389983 | orchestrator | Sunday 05 April 2026 03:59:15 +0000 (0:00:01.162) 0:00:41.447 **********
2026-04-05 03:59:25.389994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390077 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390110 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390138 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390148 | orchestrator |
2026-04-05 03:59:25.390159 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-04-05 03:59:25.390169 | orchestrator | Sunday 05 April 2026 03:59:17 +0000 (0:00:01.938) 0:00:43.385 **********
2026-04-05 03:59:25.390178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390189 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390209 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 03:59:25.390229 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 03:59:25.390238 | orchestrator |
2026-04-05 03:59:25.390248 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-04-05 03:59:25.390257 | orchestrator | Sunday 05 April 2026 03:59:19 +0000 (0:00:00.739) 0:00:44.800 **********
2026-04-05 03:59:25.390268 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-04-05 03:59:25.390277 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-04-05 03:59:25.390287 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-04-05 03:59:25.390295 | orchestrator |
2026-04-05 03:59:25.390304 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-04-05 03:59:25.390313 | orchestrator | Sunday 05 April 2026 03:59:19 +0000 (0:00:00.149) 0:00:45.539 **********
2026-04-05 03:59:25.390322 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:59:25.390331 | orchestrator |
2026-04-05 03:59:25.390341 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-04-05 03:59:25.390351 | orchestrator | Sunday 05 April 2026 03:59:20 +0000 (0:00:00.149) 0:00:45.688 **********
2026-04-05 03:59:25.390361 | orchestrator | skipping: [testbed-node-0]
2026-04-05 03:59:25.390370 | orchestrator | skipping: [testbed-node-1]
2026-04-05 03:59:25.390379 | orchestrator | skipping: [testbed-node-2]
2026-04-05 03:59:25.390389 | orchestrator |
2026-04-05 03:59:25.390398 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 03:59:25.390441 | orchestrator | Sunday 05 April 2026 03:59:20 +0000 (0:00:00.607) 0:00:46.296 **********
2026-04-05 03:59:25.390451 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 03:59:25.390461 | orchestrator |
2026-04-05 03:59:25.390470 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-04-05 03:59:25.390513 | orchestrator | Sunday 05 April 2026 03:59:21 +0000 (0:00:00.636) 0:00:46.932 **********
2026-04-05 03:59:25.390542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:26.338628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:26.338755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-05 03:59:26.338779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 03:59:26.338794 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338928 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:26.338963 | orchestrator | 2026-04-05 03:59:26.338973 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-05 03:59:26.338982 | orchestrator | Sunday 05 April 2026 03:59:25 +0000 (0:00:04.193) 0:00:51.126 ********** 2026-04-05 03:59:26.339008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:27.073615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073772 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:59:27.073797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:27.073832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073905 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:59:27.073945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:27.073964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.073997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.074170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:27.074204 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:59:27.074215 | orchestrator | 2026-04-05 03:59:27.074227 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-05 03:59:27.074253 | orchestrator | Sunday 05 April 2026 03:59:26 +0000 (0:00:00.933) 0:00:52.060 ********** 2026-04-05 03:59:27.074362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:31.748755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748902 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:59:31.748915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:31.748926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.748983 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:59:31.748993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:31.749009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.749019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.749028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:31.749037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:59:31.749046 | orchestrator | 2026-04-05 03:59:31.749056 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-05 03:59:31.749068 | orchestrator | Sunday 05 April 
2026 03:59:27 +0000 (0:00:00.968) 0:00:53.029 ********** 2026-04-05 03:59:31.749101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:38.858188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:38.858329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:38.858353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-05 03:59:38.858386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:38.858589 | orchestrator | 2026-04-05 03:59:38.858600 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-05 03:59:38.858610 | orchestrator | Sunday 05 April 2026 03:59:32 +0000 (0:00:04.663) 0:00:57.692 ********** 2026-04-05 03:59:38.858634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:43.493550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:43.493653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 03:59:43.493663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:43.493687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:43.493713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:43.493723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 03:59:43.493737 | orchestrator | 2026-04-05 03:59:43.493744 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-05 03:59:43.493751 | orchestrator | Sunday 05 April 2026 03:59:38 +0000 (0:00:06.893) 0:01:04.586 ********** 
2026-04-05 03:59:43.493760 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-05 03:59:43.493773 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-05 03:59:43.493780 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-05 03:59:43.493788 | orchestrator | 2026-04-05 03:59:43.493796 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-05 03:59:43.493809 | orchestrator | Sunday 05 April 2026 03:59:42 +0000 (0:00:03.917) 0:01:08.504 ********** 2026-04-05 03:59:43.493824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:46.948005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948115 | orchestrator | skipping: [testbed-node-0] 2026-04-05 03:59:46.948123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:46.948144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948194 | orchestrator | skipping: [testbed-node-1] 2026-04-05 03:59:46.948200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-05 03:59:46.948206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 03:59:46.948232 | orchestrator | skipping: [testbed-node-2] 2026-04-05 03:59:46.948238 | orchestrator | 2026-04-05 03:59:46.948245 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-05 03:59:46.948253 | orchestrator | Sunday 05 April 2026 03:59:43 +0000 (0:00:00.727) 0:01:09.231 ********** 2026-04-05 03:59:46.948264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 04:00:30.539448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 04:00:30.540218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-05 04:00:30.540248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 04:00:30.540369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 04:00:30.540373 | orchestrator |
2026-04-05 04:00:30.540378 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-04-05 04:00:30.540384 | orchestrator | Sunday 05 April 2026 03:59:47 +0000 (0:00:03.441) 0:01:12.673 **********
2026-04-05 04:00:30.540388 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:00:30.540393 | orchestrator |
2026-04-05 04:00:30.540397 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-04-05 04:00:30.540401 | orchestrator | Sunday 05 April 2026 03:59:49 +0000 (0:00:02.259) 0:01:14.932 **********
2026-04-05 04:00:30.540404 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:00:30.540408 | orchestrator |
2026-04-05 04:00:30.540412 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-04-05 04:00:30.540416 | orchestrator | Sunday 05 April 2026 03:59:51 +0000 (0:00:02.385) 0:01:17.318 **********
2026-04-05 04:00:30.540419 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:00:30.540423 | orchestrator |
2026-04-05 04:00:30.540427 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-05 04:00:30.540431 | orchestrator | Sunday 05 April 2026 04:00:30 +0000 (0:00:38.583) 0:01:55.901 **********
2026-04-05 04:00:30.540434 | orchestrator |
2026-04-05 04:00:30.540442 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-05 04:01:16.869304 | orchestrator | Sunday 05 April 2026 04:00:30 +0000 (0:00:00.077) 0:01:55.978 **********
2026-04-05 04:01:16.869417 | orchestrator |
2026-04-05 04:01:16.869442 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-05 04:01:16.869460 | orchestrator | Sunday 05 April 2026 04:00:30 +0000 (0:00:00.084) 0:01:56.063 **********
2026-04-05 04:01:16.869498 | orchestrator |
2026-04-05 04:01:16.869519 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-04-05 04:01:16.869538 | orchestrator | Sunday 05 April 2026 04:00:30 +0000 (0:00:00.089) 0:01:56.152 **********
2026-04-05 04:01:16.869559 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:01:16.869573 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:01:16.869584 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:01:16.869595 | orchestrator |
2026-04-05 04:01:16.869607 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-04-05 04:01:16.869674 | orchestrator | Sunday 05 April 2026 04:00:40 +0000 (0:00:10.416) 0:02:06.569 **********
2026-04-05 04:01:16.869687 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:01:16.869698 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:01:16.869709 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:01:16.869721 | orchestrator |
2026-04-05 04:01:16.869732 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-04-05 04:01:16.869773 | orchestrator | Sunday 05 April 2026 04:00:52 +0000 (0:00:11.626) 0:02:18.196 **********
2026-04-05 04:01:16.869785 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:01:16.869796 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:01:16.869807 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:01:16.869818 | orchestrator |
2026-04-05 04:01:16.869829 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-05 04:01:16.869842 | orchestrator | Sunday 05 April 2026 04:00:58 +0000 (0:00:05.596) 0:02:23.793 **********
2026-04-05 04:01:16.869855 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:01:16.869868 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:01:16.869880 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:01:16.869899 | orchestrator |
2026-04-05 04:01:16.869920 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:01:16.869940 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 04:01:16.869961 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 04:01:16.869979 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 04:01:16.869997 | orchestrator |
2026-04-05 04:01:16.870093 | orchestrator |
2026-04-05 04:01:16.870117 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:01:16.870136 | orchestrator | Sunday 05 April 2026 04:01:16 +0000 (0:00:18.166) 0:02:41.959 **********
2026-04-05 04:01:16.870155 | orchestrator | ===============================================================================
2026-04-05 04:01:16.870173 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.58s
2026-04-05 04:01:16.870193 | orchestrator | manila : Restart manila-share container -------------------------------- 18.17s
2026-04-05 04:01:16.870213 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.48s
2026-04-05 04:01:16.870231 | orchestrator | manila : Restart manila-data container --------------------------------- 11.63s
2026-04-05 04:01:16.870246 | orchestrator | manila : Restart manila-api container ---------------------------------- 10.42s
2026-04-05 04:01:16.870272 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.89s
2026-04-05 04:01:16.870284 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.79s
2026-04-05 04:01:16.870295 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.60s
2026-04-05 04:01:16.870306 | orchestrator | manila : Copying over config.json files for services -------------------- 4.66s
2026-04-05 04:01:16.870317 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.19s
2026-04-05 04:01:16.870328 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.02s
2026-04-05 04:01:16.870339 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.94s
2026-04-05 04:01:16.870349 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.92s
2026-04-05 04:01:16.870360 | orchestrator | manila : Check manila containers ---------------------------------------- 3.44s
2026-04-05 04:01:16.870372 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.43s
2026-04-05 04:01:16.870383 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.32s
2026-04-05 04:01:16.870394 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.39s
2026-04-05 04:01:16.870405 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.34s
2026-04-05 04:01:16.870416 | orchestrator | manila : Creating Manila database --------------------------------------- 2.26s
2026-04-05 04:01:16.870427 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.94s
2026-04-05 04:01:17.253154 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-04-05 04:01:29.476180 | orchestrator | 2026-04-05 04:01:29
| INFO  | Task b5f183d8-fb0a-472e-9fd6-d6ba1b7e05f4 (netdata) was prepared for execution.
2026-04-05 04:01:29.476285 | orchestrator | 2026-04-05 04:01:29 | INFO  | It takes a moment until task b5f183d8-fb0a-472e-9fd6-d6ba1b7e05f4 (netdata) has been started and output is visible here.
2026-04-05 04:03:08.257804 | orchestrator |
2026-04-05 04:03:08.257908 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 04:03:08.257922 | orchestrator |
2026-04-05 04:03:08.257929 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 04:03:08.257937 | orchestrator | Sunday 05 April 2026 04:01:34 +0000 (0:00:00.259) 0:00:00.259 **********
2026-04-05 04:03:08.257945 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-05 04:03:08.257951 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-05 04:03:08.257957 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-05 04:03:08.257964 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-05 04:03:08.257971 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-05 04:03:08.257977 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-05 04:03:08.257984 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-05 04:03:08.257991 | orchestrator |
2026-04-05 04:03:08.257998 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-05 04:03:08.258004 | orchestrator |
2026-04-05 04:03:08.258010 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-05 04:03:08.258059 | orchestrator | Sunday 05 April 2026 04:01:35 +0000 (0:00:01.079) 0:00:01.339 **********
2026-04-05 04:03:08.258067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:03:08.258072 | orchestrator |
2026-04-05 04:03:08.258077 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-05 04:03:08.258081 | orchestrator | Sunday 05 April 2026 04:01:36 +0000 (0:00:01.429) 0:00:02.768 **********
2026-04-05 04:03:08.258085 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:08.258090 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:08.258095 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:08.258099 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:08.258103 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:08.258106 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:08.258110 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:08.258115 | orchestrator |
2026-04-05 04:03:08.258119 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-05 04:03:08.258123 | orchestrator | Sunday 05 April 2026 04:01:38 +0000 (0:00:02.023) 0:00:04.791 **********
2026-04-05 04:03:08.258127 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:08.258131 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:08.258134 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:08.258138 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:08.258142 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:08.258147 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:08.258151 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:08.258155 | orchestrator |
2026-04-05 04:03:08.258159 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-05 04:03:08.258163 | orchestrator | Sunday 05 April 2026 04:01:40 +0000 (0:00:02.280) 0:00:07.072 **********
2026-04-05 04:03:08.258167 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:03:08.258171 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258175 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:03:08.258181 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:03:08.258187 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:03:08.258213 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:03:08.258219 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:03:08.258225 | orchestrator |
2026-04-05 04:03:08.258234 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-05 04:03:08.258257 | orchestrator | Sunday 05 April 2026 04:01:42 +0000 (0:00:01.648) 0:00:08.720 **********
2026-04-05 04:03:08.258264 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:03:08.258270 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258276 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:03:08.258282 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:03:08.258288 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:03:08.258294 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:03:08.258300 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:03:08.258307 | orchestrator |
2026-04-05 04:03:08.258313 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-05 04:03:08.258319 | orchestrator | Sunday 05 April 2026 04:01:58 +0000 (0:00:15.802) 0:00:24.523 **********
2026-04-05 04:03:08.258326 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:03:08.258332 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:03:08.258339 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:03:08.258345 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258352 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:03:08.258358 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:03:08.258365 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:03:08.258371 | orchestrator |
2026-04-05 04:03:08.258378 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-05 04:03:08.258384 | orchestrator | Sunday 05 April 2026 04:02:40 +0000 (0:00:41.967) 0:01:06.490 **********
2026-04-05 04:03:08.258392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:03:08.258400 | orchestrator |
2026-04-05 04:03:08.258406 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-05 04:03:08.258413 | orchestrator | Sunday 05 April 2026 04:02:42 +0000 (0:00:01.759) 0:01:08.250 **********
2026-04-05 04:03:08.258419 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-05 04:03:08.258426 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-05 04:03:08.258433 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-05 04:03:08.258439 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-05 04:03:08.258460 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-05 04:03:08.258464 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-05 04:03:08.258468 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-05 04:03:08.258472 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-05 04:03:08.258476 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-05 04:03:08.258480 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-05 04:03:08.258484 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-05 04:03:08.258488 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-05 04:03:08.258492 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-05 04:03:08.258496 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-05 04:03:08.258500 | orchestrator |
2026-04-05 04:03:08.258504 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-05 04:03:08.258509 | orchestrator | Sunday 05 April 2026 04:02:45 +0000 (0:00:03.785) 0:01:12.036 **********
2026-04-05 04:03:08.258513 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:08.258517 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:08.258521 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:08.258525 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:08.258535 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:08.258539 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:08.258543 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:08.258547 | orchestrator |
2026-04-05 04:03:08.258551 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-05 04:03:08.258555 | orchestrator | Sunday 05 April 2026 04:02:47 +0000 (0:00:01.618) 0:01:13.654 **********
2026-04-05 04:03:08.258559 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258563 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:03:08.258567 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:03:08.258571 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:03:08.258575 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:03:08.258579 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:03:08.258583 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:03:08.258587 | orchestrator |
2026-04-05 04:03:08.258591 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-05 04:03:08.258595 | orchestrator | Sunday 05 April 2026 04:02:48 +0000 (0:00:01.402) 0:01:15.057 **********
2026-04-05 04:03:08.258599 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:08.258603 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:08.258607 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:08.258611 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:08.258614 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:08.258618 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:08.258622 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:08.258626 | orchestrator |
2026-04-05 04:03:08.258630 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-05 04:03:08.258634 | orchestrator | Sunday 05 April 2026 04:02:50 +0000 (0:00:01.381) 0:01:16.438 **********
2026-04-05 04:03:08.258638 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:08.258642 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:08.258646 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:08.258650 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:08.258654 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:08.258658 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:08.258662 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:08.258666 | orchestrator |
2026-04-05 04:03:08.258670 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-05 04:03:08.258673 | orchestrator | Sunday 05 April 2026 04:02:52 +0000 (0:00:01.519) 0:01:18.183 **********
2026-04-05 04:03:08.258677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-05 04:03:08.258688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:03:08.258692 | orchestrator |
2026-04-05 04:03:08.258696 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-05 04:03:08.258700 | orchestrator | Sunday 05 April 2026 04:02:53 +0000 (0:00:01.519) 0:01:19.703 **********
2026-04-05 04:03:08.258704 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258708 | orchestrator |
2026-04-05 04:03:08.258712 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-05 04:03:08.258751 | orchestrator | Sunday 05 April 2026 04:02:56 +0000 (0:00:03.298) 0:01:23.001 **********
2026-04-05 04:03:08.258758 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:03:08.258762 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:03:08.258766 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:03:08.258770 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:03:08.258774 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:03:08.258778 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:03:08.258782 | orchestrator | changed: [testbed-manager]
2026-04-05 04:03:08.258786 | orchestrator |
2026-04-05 04:03:08.258790 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:03:08.258798 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.258803 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.258807 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.258811 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.258819 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.737017 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.737119 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 04:03:08.737132 | orchestrator |
2026-04-05 04:03:08.737142 | orchestrator |
2026-04-05 04:03:08.737152 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:03:08.737163 | orchestrator | Sunday 05 April 2026 04:03:08 +0000 (0:00:11.404) 0:01:34.405 **********
2026-04-05 04:03:08.737172 | orchestrator | ===============================================================================
2026-04-05 04:03:08.737181 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.97s
2026-04-05 04:03:08.737190 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.80s
2026-04-05 04:03:08.737199 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.40s
2026-04-05 04:03:08.737208 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.79s
2026-04-05 04:03:08.737217 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.30s
2026-04-05 04:03:08.737226 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.28s
2026-04-05 04:03:08.737234 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.02s
2026-04-05 04:03:08.737243 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.76s
2026-04-05 04:03:08.737252 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.75s
2026-04-05 04:03:08.737260 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.65s
2026-04-05 04:03:08.737269 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.62s
2026-04-05 04:03:08.737279 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.52s
2026-04-05 04:03:08.737288 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.43s
2026-04-05 04:03:08.737297 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.40s
2026-04-05 04:03:08.737305 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.38s
2026-04-05 04:03:08.737314 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s
2026-04-05 04:03:11.565170 | orchestrator | 2026-04-05 04:03:11 | INFO  | Task b366a048-6da9-4f73-b539-5eb3d41d2fa3 (prometheus) was prepared for execution.
2026-04-05 04:03:11.565275 | orchestrator | 2026-04-05 04:03:11 | INFO  | It takes a moment until task b366a048-6da9-4f73-b539-5eb3d41d2fa3 (prometheus) has been started and output is visible here.
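Both the netdata and prometheus plays open with a "Group hosts based on enabled services" task that uses Ansible's `group_by` to place each host into a dynamic group named after its service flag, e.g. `enable_netdata_True`. A minimal Python sketch of that group-naming scheme (the helper function and inventory shape are illustrative, not OSISM code):

```python
def group_hosts_by_service(hosts: dict) -> dict:
    """Mimic Ansible's group_by with key 'enable_<service>_<flag>':
    every host joins one group per boolean service flag it defines."""
    groups: dict = {}
    for host, host_vars in sorted(hosts.items()):
        for flag, enabled in host_vars.items():
            # e.g. flag='enable_netdata', enabled=True -> 'enable_netdata_True'
            groups.setdefault(f"{flag}_{enabled}", []).append(host)
    return groups
```

Plays can then target the resulting group (`hosts: enable_netdata_True`) instead of testing the flag on every task, which is exactly why the grouping play runs before each role is applied.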
2026-04-05 04:03:22.466558 | orchestrator |
2026-04-05 04:03:22.466648 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 04:03:22.466657 | orchestrator |
2026-04-05 04:03:22.466664 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 04:03:22.466670 | orchestrator | Sunday 05 April 2026 04:03:16 +0000 (0:00:00.323) 0:00:00.323 **********
2026-04-05 04:03:22.466693 | orchestrator | ok: [testbed-manager]
2026-04-05 04:03:22.466700 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:03:22.466706 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:03:22.466722 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:03:22.466762 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:03:22.466770 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:03:22.466776 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:03:22.466781 | orchestrator |
2026-04-05 04:03:22.466787 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 04:03:22.466792 | orchestrator | Sunday 05 April 2026 04:03:17 +0000 (0:00:00.929) 0:00:01.252 **********
2026-04-05 04:03:22.466799 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466805 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466810 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466815 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466821 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466826 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466831 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-05 04:03:22.466837 | orchestrator |
2026-04-05 04:03:22.466842 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-05 04:03:22.466848 | orchestrator |
2026-04-05 04:03:22.466853 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-05 04:03:22.466858 | orchestrator | Sunday 05 April 2026 04:03:18 +0000 (0:00:01.130) 0:00:02.383 **********
2026-04-05 04:03:22.466864 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:03:22.466871 | orchestrator |
2026-04-05 04:03:22.466877 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-05 04:03:22.466882 | orchestrator | Sunday 05 April 2026 04:03:20 +0000 (0:00:01.567) 0:00:03.950 **********
2026-04-05 04:03:22.466890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:22.466898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 04:03:22.466905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:22.466920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:22.466951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:22.466961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:22.466969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:22.466976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:22.466986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:22.466996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:22.467006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:22.467027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:23.421919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:23.422127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:23.422164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 04:03:23.422184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:23.422196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:23.422233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:23.422266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:23.422288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:23.422300 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:23.422311 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:23.422323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:23.422334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:23.422354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:23.422369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:23.422396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:29.041797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:29.041889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-05 04:03:29.041898 | orchestrator | 2026-04-05 04:03:29.041907 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 04:03:29.041915 | orchestrator | Sunday 05 April 2026 04:03:23 +0000 (0:00:03.328) 0:00:07.278 ********** 2026-04-05 04:03:29.041922 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 04:03:29.041930 | orchestrator | 2026-04-05 04:03:29.041936 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-05 04:03:29.041945 | orchestrator | Sunday 05 April 2026 04:03:25 +0000 (0:00:01.832) 0:00:09.111 ********** 2026-04-05 04:03:29.041956 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-05 04:03:29.041990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042129 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042142 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:03:29.042155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:29.042162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:29.042169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:29.042175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:29.042191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060446 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:31.060537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:31.060564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:31.060593 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060644 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-05 04:03:31.060668 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:31.060706 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:31.060720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:31.060782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:32.131113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-05 04:03:32.131213 | orchestrator | 2026-04-05 04:03:32.131224 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-05 04:03:32.131232 | orchestrator | Sunday 05 April 2026 04:03:31 +0000 (0:00:05.809) 0:00:14.920 ********** 2026-04-05 04:03:32.131241 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-05 04:03:32.131249 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:03:32.131257 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:03:32.131300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-05 04:03:32.131321 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:03:32.131329 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:03:32.131336 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.131349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.131355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.131362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.131369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.131376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.131386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.131397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.731554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.731675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.731700 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:03:32.731721 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:03:32.731804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.731821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.731832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.731842 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:03:32.731872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.731890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.731968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.731989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.732007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:32.732023 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:03:32.732040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.732052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.732063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:32.732072 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:03:32.732089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:32.732117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:33.598888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:33.598976 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:03:33.598987 | orchestrator |
2026-04-05 04:03:33.598996 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-05 04:03:33.599005 | orchestrator | Sunday 05 April 2026 04:03:32 +0000 (0:00:01.671) 0:00:16.592 **********
2026-04-05 04:03:33.599013 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 04:03:33.599021 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:33.599031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:33.599054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-05 04:03:33.599109 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:33.599128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:33.599140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:33.599152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:33.599163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:33.599173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:33.599191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:33.599212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:33.599233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:35.049906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.049988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:35.049998 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:03:35.050007 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:03:35.050063 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:03:35.050072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:35.050081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:35.050090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:35.050133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.050140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:35.050145 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:03:35.050205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:35.050211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.050216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.050221 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:03:35.050226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:35.050231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.050245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:35.050250 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:03:35.050255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:35.050265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:38.855583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 04:03:38.855696 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:03:38.855713 | orchestrator |
2026-04-05 04:03:38.855727 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-05 04:03:38.855819 | orchestrator | Sunday 05 April 2026 04:03:35 +0000 (0:00:02.300) 0:00:18.893 **********
2026-04-05 04:03:38.855836 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-05 04:03:38.855851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855981 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 04:03:38.855993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:38.856012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:38.856025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:38.856043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:38.856054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:38.856074 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:41.699545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 04:03:41.699654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:03:41.699694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value':
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:41.699719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:41.699865 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-05 04:03:41.699889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:03:41.699978 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-05 04:03:41.699997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:41.700016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:41.700051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:03:46.301268 | orchestrator | 2026-04-05 04:03:46.301363 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-05 04:03:46.301378 | orchestrator | Sunday 05 April 2026 04:03:41 +0000 (0:00:06.661) 0:00:25.554 ********** 2026-04-05 
04:03:46.301388 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:03:46.301400 | orchestrator | 2026-04-05 04:03:46.301410 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-05 04:03:46.301445 | orchestrator | Sunday 05 April 2026 04:03:42 +0000 (0:00:00.946) 0:00:26.500 ********** 2026-04-05 04:03:46.301458 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301472 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301482 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301507 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301519 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:46.301530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301557 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301577 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301588 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301598 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301614 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1361523, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8671048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301635 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:46.301652 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221323 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221352 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:48.221374 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1361539, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.871813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221387 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221422 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221455 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221541 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221555 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221573 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221586 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221603 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221638 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:48.221673 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849137 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849245 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849277 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849301 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849334 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849346 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849376 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849389 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1361514, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8649197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:49.849407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849418 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849430 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849450 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849462 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:49.849481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753197 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753344 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753366 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753402 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753414 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753426 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753438 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753469 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753487 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753499 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753519 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1361533, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775354442.8692536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:51.753531 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753543 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753555 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:51.753575 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148480 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148559 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148580 | orchestrator | skipping: [testbed-node-4] 
=> (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148584 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148588 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148593 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148597 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148656 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148667 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148679 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1361512, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8607922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:54.148683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148687 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148691 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:54.148713 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508133 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508198 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508207 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508214 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508221 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508229 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508259 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1361524, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775354442.867482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-05 04:03:56.508301 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508310 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508316 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508323 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508330 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508336 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508351 | orchestrator | skipping: [testbed-node-5] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:56.508363 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:58.127109 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-05 04:03:58.127199 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127216 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127235 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127261 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127280 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127304 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127313 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127330 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127338 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127352 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:03:58.127362 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127375 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:03:58.127402 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754174 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754198 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:05.754224 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754230 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1361532, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8690348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754246 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754252 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754270 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:05.754276 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754282 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:05.754287 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754292 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:05.754298 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754308 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754313 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:05.754322 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1361526, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8678508, 'gr_name': 'root', 'pw_name': 'root',
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754328 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1361519, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8654993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:05.754339 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361538, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730627 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361510, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730731 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1361549, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8737924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730773 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1361535, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8711622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730894 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1361513, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.86234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730925 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1361511, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.859792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730943 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1361531, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8688827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730960 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1361530, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8683698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.730999 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode':
1361548, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8733501, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-05 04:04:33.731016 | orchestrator |
2026-04-05 04:04:33.731035 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-05 04:04:33.731054 | orchestrator | Sunday 05 April 2026 04:04:12 +0000 (0:00:29.704) 0:00:56.205 **********
2026-04-05 04:04:33.731070 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 04:04:33.731089 | orchestrator |
2026-04-05 04:04:33.731106 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-05 04:04:33.731137 | orchestrator | Sunday 05 April 2026 04:04:13 +0000 (0:00:00.780) 0:00:56.986 **********
2026-04-05 04:04:33.731153 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731181 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731193 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731205 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731216 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731228 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731240 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731313 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731324 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731344 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731354 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731363 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731373 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731383 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731392 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731411 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731421 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731440 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731450 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731460 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731469 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731479 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731489 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731498 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731517 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731526 | orchestrator | [WARNING]: Skipped
2026-04-05 04:04:33.731536 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731546 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-05 04:04:33.731556 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 04:04:33.731565 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-05 04:04:33.731575 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 04:04:33.731585 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 04:04:33.731595 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 04:04:33.731604 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 04:04:33.731614 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 04:04:33.731624 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 04:04:33.731633 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 04:04:33.731643 | orchestrator |
2026-04-05 04:04:33.731653 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-05 04:04:33.731663 | orchestrator | Sunday 05 April 2026 04:04:15 +0000 (0:00:02.044) 0:00:59.030 **********
2026-04-05 04:04:33.731680 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:33.731690 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:33.731701 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:33.731711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:33.731720 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:33.731730 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:33.731750 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:52.429370 |
orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.429508 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:52.429569 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.429587 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:52.429601 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.429615 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 04:04:52.429629 | orchestrator |
2026-04-05 04:04:52.429644 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-05 04:04:52.429660 | orchestrator | Sunday 05 April 2026 04:04:33 +0000 (0:00:18.565) 0:01:17.596 **********
2026-04-05 04:04:52.429676 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429692 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:52.429707 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429722 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:52.429737 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429746 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:52.429755 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429764 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.429772 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429781 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.429849 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429861 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.429870 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 04:04:52.429879 | orchestrator |
2026-04-05 04:04:52.429890 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-05 04:04:52.429901 | orchestrator | Sunday 05 April 2026 04:04:36 +0000 (0:00:02.911) 0:01:20.507 **********
2026-04-05 04:04:52.429912 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.429929 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:52.429943 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.429957 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:52.429970 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.429984 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:52.429997 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.430091 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.430111 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.430124 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.430139 | orchestrator | skipping: [testbed-node-4]
2026-04-05
04:04:52.430170 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 04:04:52.430187 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.430200 | orchestrator |
2026-04-05 04:04:52.430216 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-05 04:04:52.430233 | orchestrator | Sunday 05 April 2026 04:04:38 +0000 (0:00:01.972) 0:01:22.480 **********
2026-04-05 04:04:52.430249 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 04:04:52.430264 | orchestrator |
2026-04-05 04:04:52.430280 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-05 04:04:52.430296 | orchestrator | Sunday 05 April 2026 04:04:39 +0000 (0:00:00.789) 0:01:23.270 **********
2026-04-05 04:04:52.430311 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:04:52.430320 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:52.430329 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:52.430338 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:52.430347 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.430356 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.430364 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.430373 | orchestrator |
2026-04-05 04:04:52.430382 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-05 04:04:52.430390 | orchestrator | Sunday 05 April 2026 04:04:40 +0000 (0:00:00.811) 0:01:24.081 **********
2026-04-05 04:04:52.430399 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:04:52.430407 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.430416 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.430424 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.430433 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:04:52.430441 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:04:52.430450 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:04:52.430459 | orchestrator |
2026-04-05 04:04:52.430468 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-05 04:04:52.430499 | orchestrator | Sunday 05 April 2026 04:04:42 +0000 (0:00:02.537) 0:01:26.619 **********
2026-04-05 04:04:52.430509 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430518 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430526 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:04:52.430535 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:52.430544 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430553 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430562 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:52.430570 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:52.430579 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430588 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.430597 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430606 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.430614 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 04:04:52.430623 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.430642 | orchestrator |
2026-04-05 04:04:52.430651 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-05 04:04:52.430660 | orchestrator | Sunday 05 April 2026 04:04:44 +0000 (0:00:01.745) 0:01:28.364 **********
2026-04-05 04:04:52.430669 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430678 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430687 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:04:52.430696 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:04:52.430704 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430713 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:04:52.430722 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430731 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:04:52.430739 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430748 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:04:52.430757 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430765 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:04:52.430774 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 04:04:52.430782 | orchestrator |
2026-04-05 04:04:52.430814 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-05 04:04:52.430830 | orchestrator | Sunday 05 April 2026 04:04:46 +0000 (0:00:01.641) 0:01:30.006 **********
2026-04-05 04:04:52.430840 |
orchestrator | [WARNING]: Skipped 2026-04-05 04:04:52.430851 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-05 04:04:52.430860 | orchestrator | due to this access issue: 2026-04-05 04:04:52.430868 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-05 04:04:52.430877 | orchestrator | not a directory 2026-04-05 04:04:52.430886 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:04:52.430895 | orchestrator | 2026-04-05 04:04:52.430909 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-05 04:04:52.430923 | orchestrator | Sunday 05 April 2026 04:04:47 +0000 (0:00:01.385) 0:01:31.392 ********** 2026-04-05 04:04:52.430938 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:04:52.430954 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:04:52.430968 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:04:52.430980 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:04:52.430990 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:04:52.430998 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:04:52.431007 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:04:52.431015 | orchestrator | 2026-04-05 04:04:52.431024 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-05 04:04:52.431033 | orchestrator | Sunday 05 April 2026 04:04:48 +0000 (0:00:01.020) 0:01:32.412 ********** 2026-04-05 04:04:52.431042 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:04:52.431051 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:04:52.431059 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:04:52.431068 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:04:52.431076 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:04:52.431085 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
04:04:52.431093 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:04:52.431102 | orchestrator | 2026-04-05 04:04:52.431110 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-05 04:04:52.431119 | orchestrator | Sunday 05 April 2026 04:04:49 +0000 (0:00:01.004) 0:01:33.416 ********** 2026-04-05 04:04:52.431147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-04-05 04:04:54.042620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-05 04:04:54.042678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042700 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:54.042789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:54.042858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:54.042871 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 04:04:54.042883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:54.042894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:54.042913 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:54.042934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:04:54.042947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:54.042967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296434 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296589 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296604 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-05 04:04:56.296642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 04:04:56.296698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 
'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:04:56.296759 | orchestrator | 2026-04-05 04:04:56.296772 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-05 04:04:56.296785 | orchestrator | Sunday 05 April 2026 04:04:54 +0000 (0:00:04.497) 0:01:37.913 ********** 2026-04-05 04:04:56.296833 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-05 04:04:56.296846 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 04:04:56.296857 | orchestrator | 2026-04-05 04:04:56.296869 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.296880 | orchestrator | Sunday 05 April 2026 04:04:55 +0000 (0:00:01.449) 0:01:39.363 ********** 2026-04-05 04:04:56.296892 | orchestrator | 2026-04-05 04:04:56.296903 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.296914 | orchestrator | Sunday 05 April 2026 04:04:55 +0000 (0:00:00.279) 0:01:39.642 ********** 2026-04-05 04:04:56.296925 | orchestrator | 2026-04-05 04:04:56.296936 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.296947 | orchestrator | Sunday 05 April 2026 04:04:55 +0000 (0:00:00.078) 0:01:39.721 ********** 2026-04-05 04:04:56.296960 | orchestrator | 2026-04-05 04:04:56.296973 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.296986 | orchestrator | Sunday 05 April 2026 04:04:55 +0000 (0:00:00.089) 0:01:39.810 ********** 2026-04-05 04:04:56.296998 | orchestrator | 2026-04-05 04:04:56.297011 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.297024 | orchestrator | Sunday 05 April 2026 04:04:56 +0000 (0:00:00.075) 0:01:39.885 ********** 2026-04-05 04:04:56.297036 | orchestrator | 2026-04-05 04:04:56.297049 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:04:56.297063 | orchestrator | Sunday 05 April 2026 04:04:56 +0000 (0:00:00.080) 0:01:39.966 ********** 2026-04-05 04:04:56.297075 | orchestrator | 2026-04-05 04:04:56.297096 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-05 04:06:46.495336 | orchestrator | Sunday 05 April 2026 04:04:56 +0000 (0:00:00.081) 
0:01:40.047 ********** 2026-04-05 04:06:46.495452 | orchestrator | 2026-04-05 04:06:46.495464 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-05 04:06:46.495472 | orchestrator | Sunday 05 April 2026 04:04:56 +0000 (0:00:00.099) 0:01:40.146 ********** 2026-04-05 04:06:46.495480 | orchestrator | changed: [testbed-manager] 2026-04-05 04:06:46.495489 | orchestrator | 2026-04-05 04:06:46.495497 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-05 04:06:46.495505 | orchestrator | Sunday 05 April 2026 04:05:17 +0000 (0:00:21.217) 0:02:01.364 ********** 2026-04-05 04:06:46.495513 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:06:46.495521 | orchestrator | changed: [testbed-manager] 2026-04-05 04:06:46.495528 | orchestrator | changed: [testbed-node-5] 2026-04-05 04:06:46.495535 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:06:46.495542 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:06:46.495550 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:06:46.495558 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:06:46.495565 | orchestrator | 2026-04-05 04:06:46.495574 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-05 04:06:46.495582 | orchestrator | Sunday 05 April 2026 04:05:31 +0000 (0:00:14.310) 0:02:15.675 ********** 2026-04-05 04:06:46.495614 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:06:46.495622 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:06:46.495630 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:06:46.495637 | orchestrator | 2026-04-05 04:06:46.495645 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-05 04:06:46.495654 | orchestrator | Sunday 05 April 2026 04:05:42 +0000 (0:00:11.013) 0:02:26.688 ********** 2026-04-05 04:06:46.495662 | orchestrator | changed: 
[testbed-node-0] 2026-04-05 04:06:46.495670 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:06:46.495678 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:06:46.495686 | orchestrator | 2026-04-05 04:06:46.495694 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-05 04:06:46.495702 | orchestrator | Sunday 05 April 2026 04:05:49 +0000 (0:00:06.197) 0:02:32.886 ********** 2026-04-05 04:06:46.495709 | orchestrator | changed: [testbed-node-5] 2026-04-05 04:06:46.495716 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:06:46.495723 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:06:46.495730 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:06:46.495737 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:06:46.495744 | orchestrator | changed: [testbed-manager] 2026-04-05 04:06:46.495751 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:06:46.495759 | orchestrator | 2026-04-05 04:06:46.495766 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-05 04:06:46.495773 | orchestrator | Sunday 05 April 2026 04:06:03 +0000 (0:00:14.100) 0:02:46.987 ********** 2026-04-05 04:06:46.495780 | orchestrator | changed: [testbed-manager] 2026-04-05 04:06:46.495788 | orchestrator | 2026-04-05 04:06:46.495795 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-05 04:06:46.495816 | orchestrator | Sunday 05 April 2026 04:06:18 +0000 (0:00:15.605) 0:03:02.592 ********** 2026-04-05 04:06:46.495823 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:06:46.495831 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:06:46.495839 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:06:46.495865 | orchestrator | 2026-04-05 04:06:46.495875 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-05 04:06:46.495882 | 
orchestrator | Sunday 05 April 2026 04:06:29 +0000 (0:00:10.800) 0:03:13.392 ********** 2026-04-05 04:06:46.495889 | orchestrator | changed: [testbed-manager] 2026-04-05 04:06:46.495897 | orchestrator | 2026-04-05 04:06:46.495904 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-05 04:06:46.495912 | orchestrator | Sunday 05 April 2026 04:06:35 +0000 (0:00:05.793) 0:03:19.186 ********** 2026-04-05 04:06:46.495921 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:06:46.495929 | orchestrator | changed: [testbed-node-5] 2026-04-05 04:06:46.495938 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:06:46.495946 | orchestrator | 2026-04-05 04:06:46.495954 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:06:46.495962 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 04:06:46.495973 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 04:06:46.495982 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 04:06:46.495990 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 04:06:46.495998 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 04:06:46.496005 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 04:06:46.496022 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 04:06:46.496031 | orchestrator | 2026-04-05 04:06:46.496039 | orchestrator | 2026-04-05 04:06:46.496048 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 
04:06:46.496056 | orchestrator | Sunday 05 April 2026 04:06:45 +0000 (0:00:10.522) 0:03:29.708 ********** 2026-04-05 04:06:46.496065 | orchestrator | =============================================================================== 2026-04-05 04:06:46.496086 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 29.70s 2026-04-05 04:06:46.496091 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.22s 2026-04-05 04:06:46.496097 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.57s 2026-04-05 04:06:46.496102 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.61s 2026-04-05 04:06:46.496108 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.31s 2026-04-05 04:06:46.496113 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.10s 2026-04-05 04:06:46.496119 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.01s 2026-04-05 04:06:46.496124 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.80s 2026-04-05 04:06:46.496129 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.52s 2026-04-05 04:06:46.496134 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.66s 2026-04-05 04:06:46.496139 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.20s 2026-04-05 04:06:46.496145 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.81s 2026-04-05 04:06:46.496150 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.79s 2026-04-05 04:06:46.496156 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.50s 2026-04-05 04:06:46.496161 | 
orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.33s 2026-04-05 04:06:46.496166 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.91s 2026-04-05 04:06:46.496172 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.54s 2026-04-05 04:06:46.496177 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.30s 2026-04-05 04:06:46.496182 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.04s 2026-04-05 04:06:46.496188 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.97s 2026-04-05 04:06:49.107248 | orchestrator | 2026-04-05 04:06:49 | INFO  | Task 7718bcde-708d-40d0-994f-bbee24c73af5 (grafana) was prepared for execution. 2026-04-05 04:06:49.107332 | orchestrator | 2026-04-05 04:06:49 | INFO  | It takes a moment until task 7718bcde-708d-40d0-994f-bbee24c73af5 (grafana) has been started and output is visible here. 
2026-04-05 04:07:00.095964 | orchestrator | 2026-04-05 04:07:00.096066 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:07:00.096096 | orchestrator | 2026-04-05 04:07:00.096107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:07:00.096117 | orchestrator | Sunday 05 April 2026 04:06:53 +0000 (0:00:00.326) 0:00:00.326 ********** 2026-04-05 04:07:00.096127 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:07:00.096137 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:07:00.096146 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:07:00.096154 | orchestrator | 2026-04-05 04:07:00.096163 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:07:00.096172 | orchestrator | Sunday 05 April 2026 04:06:54 +0000 (0:00:00.362) 0:00:00.689 ********** 2026-04-05 04:07:00.096181 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-05 04:07:00.096212 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-05 04:07:00.096221 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-05 04:07:00.096230 | orchestrator | 2026-04-05 04:07:00.096239 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-05 04:07:00.096247 | orchestrator | 2026-04-05 04:07:00.096256 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 04:07:00.096265 | orchestrator | Sunday 05 April 2026 04:06:54 +0000 (0:00:00.495) 0:00:01.184 ********** 2026-04-05 04:07:00.096274 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:07:00.096283 | orchestrator | 2026-04-05 04:07:00.096292 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-04-05 04:07:00.096301 | orchestrator | Sunday 05 April 2026 04:06:55 +0000 (0:00:00.643) 0:00:01.828 ********** 2026-04-05 04:07:00.096313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096344 | orchestrator | 2026-04-05 04:07:00.096353 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-05 04:07:00.096362 | orchestrator | Sunday 05 April 2026 04:06:56 +0000 (0:00:00.974) 0:00:02.802 ********** 2026-04-05 04:07:00.096371 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-05 04:07:00.096380 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-05 04:07:00.096389 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:07:00.096398 | orchestrator | 2026-04-05 04:07:00.096406 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 04:07:00.096415 | orchestrator | Sunday 05 April 2026 04:06:57 +0000 (0:00:00.898) 0:00:03.701 ********** 2026-04-05 04:07:00.096424 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:07:00.096440 | orchestrator | 2026-04-05 04:07:00.096452 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-05 04:07:00.096462 | orchestrator | Sunday 05 April 2026 04:06:57 +0000 (0:00:00.650) 0:00:04.351 ********** 2026-04-05 04:07:00.096497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:00.096534 | orchestrator | 2026-04-05 04:07:00.096546 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-05 04:07:00.096557 | orchestrator | Sunday 05 April 2026 04:06:59 +0000 
(0:00:01.439) 0:00:05.791 ********** 2026-04-05 04:07:00.096568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:00.096580 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:07:00.096592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:00.096610 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:07:00.096636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:07.449458 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:07:07.449567 | orchestrator | 2026-04-05 04:07:07.449583 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-05 04:07:07.449596 | orchestrator | Sunday 05 April 2026 04:07:00 +0000 (0:00:00.721) 0:00:06.513 ********** 2026-04-05 04:07:07.449609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:07.449623 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:07:07.449634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:07.449646 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:07:07.449656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-05 04:07:07.449666 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:07:07.449675 | orchestrator | 2026-04-05 04:07:07.449686 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-05 04:07:07.449697 | orchestrator | Sunday 05 April 2026 04:07:00 +0000 (0:00:00.731) 0:00:07.244 ********** 2026-04-05 04:07:07.449709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:07.449750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:07.449799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:07.449813 | orchestrator | 2026-04-05 04:07:07.449822 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-05 04:07:07.449832 | orchestrator | Sunday 05 April 2026 04:07:02 +0000 (0:00:01.326) 0:00:08.571 ********** 2026-04-05 04:07:07.449842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:07.449852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:07:07.449890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-04-05 04:07:07.449912 | orchestrator | 2026-04-05 04:07:07.449923 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-05 04:07:07.449935 | orchestrator | Sunday 05 April 2026 04:07:03 +0000 (0:00:01.766) 0:00:10.338 ********** 2026-04-05 04:07:07.449946 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:07:07.449958 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:07:07.449970 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:07:07.449980 | orchestrator | 2026-04-05 04:07:07.449990 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-05 04:07:07.450002 | orchestrator | Sunday 05 April 2026 04:07:04 +0000 (0:00:00.346) 0:00:10.685 ********** 2026-04-05 04:07:07.450069 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 04:07:07.450086 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 04:07:07.450097 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 04:07:07.450109 | orchestrator | 2026-04-05 04:07:07.450120 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-05 04:07:07.450131 | orchestrator | Sunday 05 April 2026 04:07:05 +0000 (0:00:01.303) 0:00:11.988 ********** 2026-04-05 04:07:07.450143 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 04:07:07.450155 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 04:07:07.450175 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 04:07:07.450186 | orchestrator | 2026-04-05 
04:07:07.450198 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-05 04:07:07.450220 | orchestrator | Sunday 05 April 2026 04:07:07 +0000 (0:00:01.867) 0:00:13.855 ********** 2026-04-05 04:07:13.902531 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:07:13.902656 | orchestrator | 2026-04-05 04:07:13.902675 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-05 04:07:13.902685 | orchestrator | Sunday 05 April 2026 04:07:08 +0000 (0:00:00.783) 0:00:14.639 ********** 2026-04-05 04:07:13.902694 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-05 04:07:13.902703 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-05 04:07:13.902712 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:07:13.902721 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:07:13.902729 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:07:13.902737 | orchestrator | 2026-04-05 04:07:13.902745 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-05 04:07:13.902753 | orchestrator | Sunday 05 April 2026 04:07:08 +0000 (0:00:00.754) 0:00:15.394 ********** 2026-04-05 04:07:13.902761 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:07:13.902769 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:07:13.902777 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:07:13.902785 | orchestrator | 2026-04-05 04:07:13.902793 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-05 04:07:13.902801 | orchestrator | Sunday 05 April 2026 04:07:09 +0000 (0:00:00.374) 0:00:15.768 ********** 2026-04-05 04:07:13.902812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361437, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7907908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361437, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7907908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1361437, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7907908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902914 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1361465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1361465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1361465, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902974 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1361442, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.902992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1361442, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.903001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1361442, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 
04:07:13.903009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1361466, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.809791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.903022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1361466, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.809791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:13.903038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1361466, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.809791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1361449, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8004074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1361449, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8004074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1361449, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8004074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1361459, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1361459, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1361459, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361436, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7898397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361436, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7898397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1361436, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7898397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361439, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7917907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361439, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7917907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1361439, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7917907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:18.194721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1361443, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7947907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1361443, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7947907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1361443, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7947907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1361452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8013046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1361452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8013046, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1361452, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8013046, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1361464, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1361464, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1361464, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.807629, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1361441, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1361441, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1361441, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775354442.7937908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1361458, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:22.324499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1361458, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.394799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1361458, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.394939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1361451, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8008173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.394959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1361451, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8008173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.394974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1361451, 'dev': 114, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8008173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1361448, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.799256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1361448, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.799256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1361448, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.799256, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1361446, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7977908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1361446, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7977908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1361446, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7977908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1361454, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1361454, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:26.395124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1361454, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.80404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.748938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1361445, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7967908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1361445, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7967908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1361445, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.7967908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1361462, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1361462, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1361462, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8063002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1361506, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.85689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1361506, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.85689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:07:30.749180 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1361506, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.85689, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:30.749191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1361474, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8217914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:30.749215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1361474, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8217914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:30.749244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1361474, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8217914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:30.749271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1361471, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.813443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1361471, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.813443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1361471, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.813443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1361481, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8307915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1361481, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8307915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1361481, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8307915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1361468, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.811412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1361468, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.811412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1361468, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.811412, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1361493, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8437917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1361493, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8437917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1361493, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8437917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1361482, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8367915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:34.728402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1361482, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8367915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1361482, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8367915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1361494, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.844792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1361494, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.844792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1361494, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.844792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1361504, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1361504, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1361504, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1361492, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1361492, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1361492, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1361477, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8277915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1361477, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8277915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:39.282633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1361477, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8277915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1361473, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8177912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1361473, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8177912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1361473, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8177912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1361475, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8227913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1361475, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8227913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1361475, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8227913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1361472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8167913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1361472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8167913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1361472, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8167913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1361479, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8287914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1361479, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8287914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1361479, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8287914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:43.267660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1361501, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.328833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1361501, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1361501, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8547919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1361497, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8517919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1361497, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8517919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1361497, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8517919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1361469, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8120441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1361469, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8120441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1361469, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8120441, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1361470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8124692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1361470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8124692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1361470, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8124692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1361485, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:07:47.329202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1361485, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:09:31.441475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1361485, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8417916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:09:31.441597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1361495, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8457918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:09:31.441611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1361495, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8457918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-05 04:09:31.441620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 21898, 'inode': 1361495, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775354442.8457918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-05 04:09:31.441629 | orchestrator | 2026-04-05 04:09:31.441638 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-05 04:09:31.441647 | orchestrator | Sunday 05 April 2026 04:07:48 +0000 (0:00:39.359) 0:00:55.128 ********** 2026-04-05 04:09:31.441656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:09:31.441699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:09:31.441708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-05 04:09:31.441716 | orchestrator | 2026-04-05 04:09:31.441723 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-05 04:09:31.441731 | orchestrator | Sunday 05 April 2026 04:07:49 +0000 (0:00:01.146) 0:00:56.274 ********** 2026-04-05 04:09:31.441738 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:09:31.441748 | orchestrator | 2026-04-05 04:09:31.441755 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-05 04:09:31.441763 | orchestrator | Sunday 05 April 2026 04:07:52 +0000 (0:00:02.483) 0:00:58.757 ********** 2026-04-05 04:09:31.441770 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:09:31.441777 | orchestrator | 2026-04-05 04:09:31.441789 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 04:09:31.441796 | orchestrator | Sunday 05 April 2026 04:07:54 +0000 (0:00:02.405) 0:01:01.162 ********** 2026-04-05 04:09:31.441803 | orchestrator | 2026-04-05 04:09:31.441811 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 04:09:31.441818 | orchestrator | 
Sunday 05 April 2026 04:07:54 +0000 (0:00:00.106) 0:01:01.269 ********** 2026-04-05 04:09:31.441825 | orchestrator | 2026-04-05 04:09:31.441832 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 04:09:31.441839 | orchestrator | Sunday 05 April 2026 04:07:54 +0000 (0:00:00.088) 0:01:01.357 ********** 2026-04-05 04:09:31.441846 | orchestrator | 2026-04-05 04:09:31.441854 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-05 04:09:31.441861 | orchestrator | Sunday 05 April 2026 04:07:55 +0000 (0:00:00.085) 0:01:01.443 ********** 2026-04-05 04:09:31.441869 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:09:31.441876 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:09:31.441937 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:09:31.441946 | orchestrator | 2026-04-05 04:09:31.441954 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-05 04:09:31.441961 | orchestrator | Sunday 05 April 2026 04:07:57 +0000 (0:00:02.457) 0:01:03.901 ********** 2026-04-05 04:09:31.441968 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:09:31.441976 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:09:31.441983 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-05 04:09:31.441991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-05 04:09:31.442007 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-04-05 04:09:31.442063 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-04-05 04:09:31.442073 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:09:31.442083 | orchestrator | 2026-04-05 04:09:31.442092 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-05 04:09:31.442102 | orchestrator | Sunday 05 April 2026 04:08:48 +0000 (0:00:51.088) 0:01:54.990 ********** 2026-04-05 04:09:31.442111 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:09:31.442119 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:09:31.442129 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:09:31.442137 | orchestrator | 2026-04-05 04:09:31.442146 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-05 04:09:31.442155 | orchestrator | Sunday 05 April 2026 04:09:26 +0000 (0:00:37.568) 0:02:32.559 ********** 2026-04-05 04:09:31.442163 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:09:31.442171 | orchestrator | 2026-04-05 04:09:31.442180 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-05 04:09:31.442189 | orchestrator | Sunday 05 April 2026 04:09:28 +0000 (0:00:02.215) 0:02:34.774 ********** 2026-04-05 04:09:31.442197 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:09:31.442205 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:09:31.442214 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:09:31.442222 | orchestrator | 2026-04-05 04:09:31.442231 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-05 04:09:31.442240 | orchestrator | Sunday 05 April 2026 04:09:28 +0000 (0:00:00.331) 0:02:35.105 ********** 2026-04-05 04:09:31.442250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-05 04:09:31.442268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-05 04:09:32.178637 | orchestrator | 2026-04-05 04:09:32.178747 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-05 04:09:32.178763 | orchestrator | Sunday 05 April 2026 04:09:31 +0000 (0:00:02.740) 0:02:37.846 ********** 2026-04-05 04:09:32.178774 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:09:32.178785 | orchestrator | 2026-04-05 04:09:32.178794 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:09:32.178805 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 04:09:32.178815 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 04:09:32.178824 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 04:09:32.178833 | orchestrator | 2026-04-05 04:09:32.178843 | orchestrator | 2026-04-05 04:09:32.178858 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:09:32.178873 | orchestrator | Sunday 05 April 2026 04:09:31 +0000 (0:00:00.312) 0:02:38.159 ********** 2026-04-05 04:09:32.178888 | orchestrator | =============================================================================== 2026-04-05 04:09:32.178978 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.09s 2026-04-05 04:09:32.178997 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 39.36s 2026-04-05 04:09:32.179029 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.57s 2026-04-05 04:09:32.179039 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.74s 2026-04-05 04:09:32.179048 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.48s 2026-04-05 04:09:32.179057 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.46s 2026-04-05 04:09:32.179066 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.41s 2026-04-05 04:09:32.179081 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.22s 2026-04-05 04:09:32.179096 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.87s 2026-04-05 04:09:32.179112 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.77s 2026-04-05 04:09:32.179126 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.44s 2026-04-05 04:09:32.179141 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-04-05 04:09:32.179156 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2026-04-05 04:09:32.179173 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.15s 2026-04-05 04:09:32.179188 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.97s 2026-04-05 04:09:32.179203 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s 2026-04-05 04:09:32.179218 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.78s 2026-04-05 04:09:32.179232 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.75s 2026-04-05 04:09:32.179246 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.73s 2026-04-05 04:09:32.179261 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.72s 2026-04-05 04:09:32.578237 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-04-05 04:09:32.586165 | orchestrator | + set -e 2026-04-05 04:09:32.586232 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:09:32.586868 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:09:32.586885 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:09:32.586891 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:09:32.586897 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:09:32.586928 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 04:09:32.587743 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 04:09:32.587763 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 04:09:32.587768 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 04:09:32.587773 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 04:09:32.587778 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 04:09:32.587783 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 04:09:32.587788 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:09:32.587793 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:09:32.587799 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 04:09:32.587804 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 04:09:32.587809 | orchestrator | ++ export ARA=false 2026-04-05 04:09:32.587813 | orchestrator | ++ ARA=false 2026-04-05 04:09:32.587818 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 04:09:32.587823 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 04:09:32.587827 | orchestrator | ++ export TEMPEST=false 2026-04-05 04:09:32.587832 | orchestrator | ++ 
TEMPEST=false 2026-04-05 04:09:32.587836 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 04:09:32.587841 | orchestrator | ++ IS_ZUUL=true 2026-04-05 04:09:32.587846 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:09:32.587851 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:09:32.587855 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 04:09:32.587860 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 04:09:32.587864 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 04:09:32.587869 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 04:09:32.587874 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 04:09:32.587878 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 04:09:32.587884 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 04:09:32.587891 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 04:09:32.589238 | orchestrator | ++ semver 9.5.0 8.0.0 2026-04-05 04:09:32.653374 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 04:09:32.653452 | orchestrator | + osism apply clusterapi 2026-04-05 04:09:34.877086 | orchestrator | 2026-04-05 04:09:34 | INFO  | Task 3e6c9276-ff88-4827-bf1b-4430b853d1d5 (clusterapi) was prepared for execution. 2026-04-05 04:09:34.877172 | orchestrator | 2026-04-05 04:09:34 | INFO  | It takes a moment until task 3e6c9276-ff88-4827-bf1b-4430b853d1d5 (clusterapi) has been started and output is visible here. 
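The deploy script above gates on `semver 9.5.0 8.0.0` and then tests the captured result with `[[ 1 -ge 0 ]]`. A minimal standalone sketch of such a comparison helper, assuming the same contract (print 1 if the first version is greater, 0 if equal, -1 if smaller — the helper name and contract are inferred from the log, not confirmed):

```shell
#!/usr/bin/env bash
# Sketch of a semver comparison helper like the `semver` call in the log.
# Prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2 (assumed contract, since
# the deploy script checks the captured result with `[[ ... -ge 0 ]]`).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        # sort -V orders version strings numerically; if $1 sorts first, it is older.
        echo -1
    else
        echo 1
    fi
}

# Gate an action on the manager version, mirroring the deploy script:
MANAGER_VERSION=9.5.0
if [[ "$(semver_cmp "$MANAGER_VERSION" 8.0.0)" -ge 0 ]]; then
    echo "manager >= 8.0.0, proceeding"
fi
```

`sort -V` does the heavy lifting here; it is a GNU coreutils extension, which is a safe assumption on the Debian/Ubuntu hosts this job runs on.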
2026-04-05 04:10:52.787545 | orchestrator | 2026-04-05 04:10:52.787638 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-05 04:10:52.787650 | orchestrator | 2026-04-05 04:10:52.787656 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-05 04:10:52.787663 | orchestrator | Sunday 05 April 2026 04:09:39 +0000 (0:00:00.204) 0:00:00.204 ********** 2026-04-05 04:10:52.787671 | orchestrator | included: cert_manager for testbed-manager 2026-04-05 04:10:52.787677 | orchestrator | 2026-04-05 04:10:52.787684 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-05 04:10:52.787690 | orchestrator | Sunday 05 April 2026 04:09:39 +0000 (0:00:00.258) 0:00:00.462 ********** 2026-04-05 04:10:52.787697 | orchestrator | changed: [testbed-manager] 2026-04-05 04:10:52.787704 | orchestrator | 2026-04-05 04:10:52.787710 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-05 04:10:52.787717 | orchestrator | Sunday 05 April 2026 04:09:45 +0000 (0:00:05.770) 0:00:06.233 ********** 2026-04-05 04:10:52.787723 | orchestrator | changed: [testbed-manager] 2026-04-05 04:10:52.787729 | orchestrator | 2026-04-05 04:10:52.787735 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-05 04:10:52.787742 | orchestrator | 2026-04-05 04:10:52.787748 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-05 04:10:52.787754 | orchestrator | Sunday 05 April 2026 04:10:30 +0000 (0:00:45.154) 0:00:51.387 ********** 2026-04-05 04:10:52.787760 | orchestrator | ok: [testbed-manager] 2026-04-05 04:10:52.787767 | orchestrator | 2026-04-05 04:10:52.787773 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-05 04:10:52.787779 | orchestrator | Sunday 05 
April 2026 04:10:32 +0000 (0:00:01.230) 0:00:52.618 ********** 2026-04-05 04:10:52.787799 | orchestrator | ok: [testbed-manager] 2026-04-05 04:10:52.787806 | orchestrator | 2026-04-05 04:10:52.787812 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-05 04:10:52.787818 | orchestrator | Sunday 05 April 2026 04:10:32 +0000 (0:00:00.165) 0:00:52.784 ********** 2026-04-05 04:10:52.787825 | orchestrator | ok: [testbed-manager] 2026-04-05 04:10:52.787831 | orchestrator | 2026-04-05 04:10:52.787837 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-05 04:10:52.787843 | orchestrator | Sunday 05 April 2026 04:10:49 +0000 (0:00:17.154) 0:01:09.938 ********** 2026-04-05 04:10:52.787849 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:10:52.787856 | orchestrator | 2026-04-05 04:10:52.787862 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-05 04:10:52.787868 | orchestrator | Sunday 05 April 2026 04:10:49 +0000 (0:00:00.163) 0:01:10.102 ********** 2026-04-05 04:10:52.787874 | orchestrator | changed: [testbed-manager] 2026-04-05 04:10:52.787880 | orchestrator | 2026-04-05 04:10:52.787886 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:10:52.787893 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 04:10:52.787900 | orchestrator | 2026-04-05 04:10:52.787906 | orchestrator | 2026-04-05 04:10:52.787990 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:10:52.787999 | orchestrator | Sunday 05 April 2026 04:10:52 +0000 (0:00:02.765) 0:01:12.868 ********** 2026-04-05 04:10:52.788005 | orchestrator | =============================================================================== 2026-04-05 04:10:52.788012 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 45.15s 2026-04-05 04:10:52.788036 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.15s 2026-04-05 04:10:52.788042 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.77s 2026-04-05 04:10:52.788049 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.77s 2026-04-05 04:10:52.788055 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.23s 2026-04-05 04:10:52.788061 | orchestrator | Include cert_manager role ----------------------------------------------- 0.26s 2026-04-05 04:10:52.788067 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s 2026-04-05 04:10:52.788077 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s 2026-04-05 04:10:53.170177 | orchestrator | + osism apply magnum 2026-04-05 04:10:55.542105 | orchestrator | 2026-04-05 04:10:55 | INFO  | Task 2d9494b3-54ef-457b-b119-3f31c437685c (magnum) was prepared for execution. 2026-04-05 04:10:55.542197 | orchestrator | 2026-04-05 04:10:55 | INFO  | It takes a moment until task 2d9494b3-54ef-457b-b119-3f31c437685c (magnum) has been started and output is visible here. 
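Several handlers earlier in the log (e.g. "Waiting for grafana to start on first node", which logged `FAILED - RETRYING ... (12 retries left)` before succeeding) poll a service with a bounded retry budget. A minimal sketch of that wait-loop pattern; the function name and the marker-file check are illustrative stand-ins for the real HTTP health probe:

```shell
#!/usr/bin/env bash
# Sketch of a bounded retry loop like the "Waiting for grafana to start"
# handler above: run a check command until it succeeds or retries run out.
wait_for_service() {
    local retries=$1 delay=$2
    shift 2
    local attempt
    for (( attempt = 1; attempt <= retries; attempt++ )); do
        if "$@"; then
            return 0
        fi
        echo "FAILED - RETRYING ($(( retries - attempt )) retries left)" >&2
        sleep "$delay"
    done
    return 1
}

# Example: a temp-file check stands in for the real `curl --fail` probe.
marker=$(mktemp)
wait_for_service 12 0 test -e "$marker" && echo "service is up"
rm -f "$marker"
```

In the Ansible original this is expressed declaratively with `retries`/`delay`/`until` on a `uri` task; the shell form just makes the control flow explicit.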
2026-04-05 04:11:40.838229 | orchestrator | 2026-04-05 04:11:40.838381 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:11:40.838406 | orchestrator | 2026-04-05 04:11:40.838422 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:11:40.838439 | orchestrator | Sunday 05 April 2026 04:11:00 +0000 (0:00:00.298) 0:00:00.298 ********** 2026-04-05 04:11:40.839416 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:11:40.839463 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:11:40.839472 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:11:40.839481 | orchestrator | 2026-04-05 04:11:40.839490 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:11:40.839499 | orchestrator | Sunday 05 April 2026 04:11:00 +0000 (0:00:00.356) 0:00:00.654 ********** 2026-04-05 04:11:40.839508 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-05 04:11:40.839516 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-05 04:11:40.839524 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-05 04:11:40.839532 | orchestrator | 2026-04-05 04:11:40.839541 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-05 04:11:40.839549 | orchestrator | 2026-04-05 04:11:40.839557 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 04:11:40.839565 | orchestrator | Sunday 05 April 2026 04:11:01 +0000 (0:00:00.513) 0:00:01.168 ********** 2026-04-05 04:11:40.839573 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:11:40.839582 | orchestrator | 2026-04-05 04:11:40.839590 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-05 
04:11:40.839597 | orchestrator | Sunday 05 April 2026 04:11:01 +0000 (0:00:00.650) 0:00:01.819 ********** 2026-04-05 04:11:40.839606 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-05 04:11:40.839614 | orchestrator | 2026-04-05 04:11:40.839622 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-05 04:11:40.839630 | orchestrator | Sunday 05 April 2026 04:11:05 +0000 (0:00:03.731) 0:00:05.550 ********** 2026-04-05 04:11:40.839637 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-05 04:11:40.839646 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-05 04:11:40.839654 | orchestrator | 2026-04-05 04:11:40.839662 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-05 04:11:40.839670 | orchestrator | Sunday 05 April 2026 04:11:12 +0000 (0:00:07.025) 0:00:12.576 ********** 2026-04-05 04:11:40.839678 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 04:11:40.839686 | orchestrator | 2026-04-05 04:11:40.839719 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-05 04:11:40.839741 | orchestrator | Sunday 05 April 2026 04:11:16 +0000 (0:00:03.677) 0:00:16.253 ********** 2026-04-05 04:11:40.839749 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 04:11:40.839757 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-05 04:11:40.839765 | orchestrator | 2026-04-05 04:11:40.839773 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-05 04:11:40.839781 | orchestrator | Sunday 05 April 2026 04:11:20 +0000 (0:00:04.327) 0:00:20.581 ********** 2026-04-05 04:11:40.839789 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-05 04:11:40.839797 | orchestrator | 2026-04-05 04:11:40.839805 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-05 04:11:40.839853 | orchestrator | Sunday 05 April 2026 04:11:23 +0000 (0:00:03.441) 0:00:24.022 ********** 2026-04-05 04:11:40.839862 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-05 04:11:40.839870 | orchestrator | 2026-04-05 04:11:40.839878 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-05 04:11:40.839886 | orchestrator | Sunday 05 April 2026 04:11:28 +0000 (0:00:04.064) 0:00:28.087 ********** 2026-04-05 04:11:40.839894 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:11:40.839902 | orchestrator | 2026-04-05 04:11:40.839909 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-05 04:11:40.839979 | orchestrator | Sunday 05 April 2026 04:11:31 +0000 (0:00:03.407) 0:00:31.494 ********** 2026-04-05 04:11:40.839988 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:11:40.839996 | orchestrator | 2026-04-05 04:11:40.840004 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-05 04:11:40.840012 | orchestrator | Sunday 05 April 2026 04:11:35 +0000 (0:00:04.028) 0:00:35.523 ********** 2026-04-05 04:11:40.840020 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:11:40.840028 | orchestrator | 2026-04-05 04:11:40.840036 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-05 04:11:40.840044 | orchestrator | Sunday 05 April 2026 04:11:39 +0000 (0:00:03.628) 0:00:39.151 ********** 2026-04-05 04:11:40.840078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:40.840090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:40.840113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:40.840123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:40.840132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:40.840147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:48.882003 | orchestrator | 2026-04-05 04:11:48.882180 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-05 04:11:48.882200 | orchestrator | Sunday 05 April 2026 04:11:40 +0000 (0:00:01.707) 0:00:40.859 ********** 2026-04-05 04:11:48.882213 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:11:48.882226 | orchestrator | 2026-04-05 04:11:48.882238 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-05 04:11:48.882249 | orchestrator | Sunday 05 April 2026 04:11:40 +0000 (0:00:00.170) 0:00:41.030 ********** 2026-04-05 04:11:48.882260 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:11:48.882271 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:11:48.882282 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:11:48.882316 | orchestrator | 2026-04-05 04:11:48.882328 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-05 04:11:48.882339 | orchestrator | Sunday 05 April 2026 04:11:41 +0000 (0:00:00.370) 0:00:41.400 ********** 2026-04-05 04:11:48.882351 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:11:48.882361 | orchestrator | 2026-04-05 04:11:48.882372 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-05 04:11:48.882383 | orchestrator | Sunday 05 April 2026 04:11:42 +0000 (0:00:00.909) 0:00:42.310 ********** 2026-04-05 04:11:48.882396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:48.882425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:48.882437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:48.882470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:48.882493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:48.882505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:48.882517 | orchestrator | 2026-04-05 04:11:48.882534 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-05 04:11:48.882545 
| orchestrator | Sunday 05 April 2026 04:11:44 +0000 (0:00:02.556) 0:00:44.866 ********** 2026-04-05 04:11:48.882556 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:11:48.882568 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:11:48.882579 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:11:48.882590 | orchestrator | 2026-04-05 04:11:48.882600 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 04:11:48.882611 | orchestrator | Sunday 05 April 2026 04:11:45 +0000 (0:00:00.667) 0:00:45.533 ********** 2026-04-05 04:11:48.882623 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:11:48.882634 | orchestrator | 2026-04-05 04:11:48.882644 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-05 04:11:48.882655 | orchestrator | Sunday 05 April 2026 04:11:46 +0000 (0:00:00.621) 0:00:46.154 ********** 2026-04-05 04:11:48.882667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:48.882687 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:49.840816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:49.840974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:49.840989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:49.840997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:11:49.841005 | orchestrator | 2026-04-05 04:11:49.841014 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-05 04:11:49.841023 | orchestrator | Sunday 05 April 2026 04:11:48 +0000 (0:00:02.758) 0:00:48.913 ********** 2026-04-05 04:11:49.841063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:49.841072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:49.841080 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:11:49.841094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:49.841102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:49.841110 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:11:49.841118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:49.841137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:53.610114 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:11:53.610214 | orchestrator | 2026-04-05 
04:11:53.610228 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-05 04:11:53.610239 | orchestrator | Sunday 05 April 2026 04:11:49 +0000 (0:00:00.954) 0:00:49.867 ********** 2026-04-05 04:11:53.610251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:53.610279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:53.610290 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 04:11:53.610300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:53.610328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:53.610337 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:11:53.610363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:11:53.610373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:11:53.610382 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:11:53.610391 | orchestrator | 2026-04-05 04:11:53.610400 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-05 04:11:53.610414 | orchestrator | Sunday 05 April 2026 04:11:50 +0000 (0:00:00.966) 0:00:50.834 ********** 2026-04-05 04:11:53.610424 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:53.610439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:11:53.610493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:00.288127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288305 | orchestrator | 2026-04-05 04:12:00.288320 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-05 04:12:00.288334 | orchestrator | Sunday 05 April 2026 04:11:53 +0000 (0:00:02.809) 0:00:53.643 ********** 2026-04-05 04:12:00.288346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:00.288377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:00.288389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:00.288408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:00.288452 | orchestrator | 2026-04-05 04:12:00.288464 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-05 04:12:00.288476 | orchestrator | Sunday 05 April 2026 04:11:59 +0000 (0:00:05.851) 0:00:59.495 ********** 2026-04-05 04:12:00.288497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:12:02.431654 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:12:02.431758 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:12:02.431796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:12:02.431835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:12:02.431849 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:12:02.431862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-05 04:12:02.431895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:12:02.431908 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:12:02.431964 | orchestrator | 2026-04-05 04:12:02.431973 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-05 04:12:02.431983 | orchestrator | Sunday 05 April 2026 04:12:00 +0000 (0:00:00.822) 0:01:00.318 ********** 2026-04-05 04:12:02.431998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:02.432014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:02.432023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-05 04:12:02.432030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:02.432064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 04:12:56.207766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-05 04:12:56.207880 | orchestrator | 2026-04-05 04:12:56.207894 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 04:12:56.207904 | orchestrator | Sunday 05 April 2026 04:12:02 +0000 (0:00:02.137) 0:01:02.455 ********** 2026-04-05 04:12:56.207984 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:12:56.207998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:12:56.208006 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:12:56.208014 | orchestrator | 2026-04-05 04:12:56.208022 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-05 04:12:56.208030 | orchestrator | Sunday 05 April 2026 04:12:03 +0000 (0:00:00.609) 0:01:03.064 ********** 2026-04-05 04:12:56.208038 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:12:56.208046 | orchestrator | 2026-04-05 04:12:56.208054 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-05 04:12:56.208062 | orchestrator | Sunday 05 April 2026 04:12:05 +0000 (0:00:02.293) 0:01:05.357 ********** 2026-04-05 04:12:56.208070 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:12:56.208078 | orchestrator | 2026-04-05 04:12:56.208086 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-05 04:12:56.208094 | orchestrator | Sunday 05 April 2026 04:12:07 +0000 (0:00:02.510) 0:01:07.868 ********** 2026-04-05 04:12:56.208102 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:12:56.208109 | orchestrator | 2026-04-05 04:12:56.208117 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 04:12:56.208127 | orchestrator | Sunday 05 April 2026 04:12:24 +0000 (0:00:16.774) 0:01:24.642 ********** 2026-04-05 04:12:56.208141 | orchestrator | 2026-04-05 04:12:56.208154 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-05 04:12:56.208167 | orchestrator | Sunday 05 April 2026 04:12:24 +0000 (0:00:00.084) 0:01:24.726 ********** 2026-04-05 04:12:56.208193 | orchestrator | 2026-04-05 04:12:56.208217 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-05 04:12:56.208230 | orchestrator | Sunday 05 April 2026 04:12:24 +0000 (0:00:00.079) 0:01:24.806 ********** 2026-04-05 04:12:56.208242 | orchestrator | 2026-04-05 04:12:56.208254 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-05 04:12:56.208267 | orchestrator | Sunday 05 April 2026 04:12:24 +0000 (0:00:00.075) 0:01:24.881 ********** 2026-04-05 04:12:56.208279 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:12:56.208292 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:12:56.208305 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:12:56.208318 | orchestrator | 2026-04-05 04:12:56.208331 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-05 04:12:56.208345 | orchestrator | Sunday 05 April 2026 04:12:39 +0000 (0:00:15.005) 0:01:39.887 ********** 2026-04-05 04:12:56.208357 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:12:56.208371 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:12:56.208383 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:12:56.208396 | orchestrator | 2026-04-05 04:12:56.208409 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:12:56.208423 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:12:56.208438 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 04:12:56.208452 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-05 04:12:56.208487 | orchestrator | 2026-04-05 04:12:56.208501 | orchestrator | 2026-04-05 04:12:56.208515 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:12:56.208542 | orchestrator | Sunday 05 April 2026 04:12:55 +0000 (0:00:15.812) 0:01:55.699 ********** 2026-04-05 04:12:56.208555 | orchestrator | =============================================================================== 2026-04-05 04:12:56.208563 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.77s 2026-04-05 04:12:56.208571 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.81s 2026-04-05 04:12:56.208579 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.01s 2026-04-05 04:12:56.208588 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.03s 2026-04-05 04:12:56.208596 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.85s 2026-04-05 04:12:56.208604 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.33s 2026-04-05 04:12:56.208612 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.06s 2026-04-05 04:12:56.208639 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.03s 2026-04-05 04:12:56.208647 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.73s 2026-04-05 04:12:56.208655 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.68s 2026-04-05 04:12:56.208663 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.63s 2026-04-05 04:12:56.208671 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.44s 2026-04-05 04:12:56.208678 | 
orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.41s 2026-04-05 04:12:56.208686 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.81s 2026-04-05 04:12:56.208694 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.76s 2026-04-05 04:12:56.208710 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.56s 2026-04-05 04:12:56.208719 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.51s 2026-04-05 04:12:56.208726 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.29s 2026-04-05 04:12:56.208734 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.14s 2026-04-05 04:12:56.208742 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.71s 2026-04-05 04:12:56.960801 | orchestrator | ok: Runtime: 1:47:58.505777 2026-04-05 04:12:57.204593 | 2026-04-05 04:12:57.204736 | TASK [Deploy in a nutshell] 2026-04-05 04:12:57.740481 | orchestrator | skipping: Conditional result was False 2026-04-05 04:12:57.763794 | 2026-04-05 04:12:57.763942 | TASK [Bootstrap services] 2026-04-05 04:12:58.490166 | orchestrator | 2026-04-05 04:12:58.490304 | orchestrator | # BOOTSTRAP 2026-04-05 04:12:58.490316 | orchestrator | 2026-04-05 04:12:58.490323 | orchestrator | + set -e 2026-04-05 04:12:58.490329 | orchestrator | + echo 2026-04-05 04:12:58.490336 | orchestrator | + echo '# BOOTSTRAP' 2026-04-05 04:12:58.490346 | orchestrator | + echo 2026-04-05 04:12:58.490371 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-05 04:12:58.498097 | orchestrator | + set -e 2026-04-05 04:12:58.498189 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-05 04:13:01.026213 | orchestrator | 2026-04-05 04:13:01 | INFO  | It takes a 
moment until task 981ccfa0-95a2-4320-a730-34e0f099dda8 (flavor-manager) has been started and output is visible here. 2026-04-05 04:13:10.558273 | orchestrator | 2026-04-05 04:13:05 | INFO  | Flavor SCS-1L-1 created 2026-04-05 04:13:10.558383 | orchestrator | 2026-04-05 04:13:05 | INFO  | Flavor SCS-1L-1-5 created 2026-04-05 04:13:10.558398 | orchestrator | 2026-04-05 04:13:05 | INFO  | Flavor SCS-1V-2 created 2026-04-05 04:13:10.558411 | orchestrator | 2026-04-05 04:13:06 | INFO  | Flavor SCS-1V-2-5 created 2026-04-05 04:13:10.558424 | orchestrator | 2026-04-05 04:13:06 | INFO  | Flavor SCS-1V-4 created 2026-04-05 04:13:10.558436 | orchestrator | 2026-04-05 04:13:06 | INFO  | Flavor SCS-1V-4-10 created 2026-04-05 04:13:10.558449 | orchestrator | 2026-04-05 04:13:06 | INFO  | Flavor SCS-1V-8 created 2026-04-05 04:13:10.558462 | orchestrator | 2026-04-05 04:13:06 | INFO  | Flavor SCS-1V-8-20 created 2026-04-05 04:13:10.558487 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-4 created 2026-04-05 04:13:10.558502 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-4-10 created 2026-04-05 04:13:10.558514 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-8 created 2026-04-05 04:13:10.558527 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-8-20 created 2026-04-05 04:13:10.558539 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-16 created 2026-04-05 04:13:10.558553 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-2V-16-50 created 2026-04-05 04:13:10.558565 | orchestrator | 2026-04-05 04:13:07 | INFO  | Flavor SCS-4V-8 created 2026-04-05 04:13:10.558577 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor SCS-4V-8-20 created 2026-04-05 04:13:10.558590 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor SCS-4V-16 created 2026-04-05 04:13:10.558603 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor SCS-4V-16-50 created 2026-04-05 04:13:10.558616 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor 
SCS-4V-32 created 2026-04-05 04:13:10.558625 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor SCS-4V-32-100 created 2026-04-05 04:13:10.558632 | orchestrator | 2026-04-05 04:13:08 | INFO  | Flavor SCS-8V-16 created 2026-04-05 04:13:10.558640 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-8V-16-50 created 2026-04-05 04:13:10.558647 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-8V-32 created 2026-04-05 04:13:10.558655 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-8V-32-100 created 2026-04-05 04:13:10.558662 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-16V-32 created 2026-04-05 04:13:10.558670 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-16V-32-100 created 2026-04-05 04:13:10.558677 | orchestrator | 2026-04-05 04:13:09 | INFO  | Flavor SCS-2V-4-20s created 2026-04-05 04:13:10.558684 | orchestrator | 2026-04-05 04:13:10 | INFO  | Flavor SCS-4V-8-50s created 2026-04-05 04:13:10.558691 | orchestrator | 2026-04-05 04:13:10 | INFO  | Flavor SCS-8V-32-100s created 2026-04-05 04:13:13.473620 | orchestrator | 2026-04-05 04:13:13 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-05 04:13:13.570781 | orchestrator | 2026-04-05 04:13:13 | INFO  | Task 6ea5d555-626b-455e-b235-a9f25b8dfafc (bootstrap-basic) was prepared for execution. 2026-04-05 04:13:13.570862 | orchestrator | 2026-04-05 04:13:13 | INFO  | It takes a moment until task 6ea5d555-626b-455e-b235-a9f25b8dfafc (bootstrap-basic) has been started and output is visible here. 
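The flavor names created above follow the SCS flavor-naming convention, `SCS-<vCPUs><class>-<RAM GiB>[-<root disk GB>[s]]`, where the class letter after the CPU count distinguishes vCPU guarantees (e.g. `V` for a regular vCPU, `L` for a heavily oversubscribed low-performance vCPU) and a trailing `s` marks an SSD root disk. A minimal sketch of a parser for such names — the helper and its field names are illustrative and not part of the OSISM flavor-manager:

```python
import re
from typing import NamedTuple, Optional

class ScsFlavor(NamedTuple):
    cpus: int
    cpu_class: str        # e.g. "V" (vCPU), "L" (low-performance vCPU)
    ram_gib: int
    disk_gb: Optional[int]  # None for diskless flavors such as SCS-4V-8
    ssd: bool             # trailing "s" marks an SSD root disk

# SCS-<n><class>-<ram>[-<disk>[s]], e.g. SCS-2V-4-20s
_PATTERN = re.compile(r"^SCS-(\d+)([CTVL])-(\d+)(?:-(\d+)(s?))?$")

def parse_scs_flavor(name: str) -> ScsFlavor:
    """Parse an SCS flavor name like 'SCS-4V-16-50' into its components."""
    m = _PATTERN.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name!r}")
    cpus, cpu_class, ram, disk, ssd = m.groups()
    return ScsFlavor(int(cpus), cpu_class, int(ram),
                     int(disk) if disk else None, ssd == "s")
```

For example, `parse_scs_flavor("SCS-2V-4-20s")` yields 2 vCPUs, 4 GiB RAM, and a 20 GB SSD root disk, matching one of the flavors in the log.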
2026-04-05 04:14:03.590373 | orchestrator | 2026-04-05 04:14:03.590481 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-05 04:14:03.590496 | orchestrator | 2026-04-05 04:14:03.590506 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-05 04:14:03.590516 | orchestrator | Sunday 05 April 2026 04:13:18 +0000 (0:00:00.085) 0:00:00.085 ********** 2026-04-05 04:14:03.590526 | orchestrator | ok: [localhost] 2026-04-05 04:14:03.590537 | orchestrator | 2026-04-05 04:14:03.590546 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-05 04:14:03.590555 | orchestrator | Sunday 05 April 2026 04:13:21 +0000 (0:00:02.358) 0:00:02.444 ********** 2026-04-05 04:14:03.590563 | orchestrator | ok: [localhost] 2026-04-05 04:14:03.590572 | orchestrator | 2026-04-05 04:14:03.590583 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-05 04:14:03.590598 | orchestrator | Sunday 05 April 2026 04:13:29 +0000 (0:00:08.383) 0:00:10.828 ********** 2026-04-05 04:14:03.590613 | orchestrator | changed: [localhost] 2026-04-05 04:14:03.590627 | orchestrator | 2026-04-05 04:14:03.590641 | orchestrator | TASK [Create public network] *************************************************** 2026-04-05 04:14:03.590656 | orchestrator | Sunday 05 April 2026 04:13:36 +0000 (0:00:07.334) 0:00:18.163 ********** 2026-04-05 04:14:03.590670 | orchestrator | changed: [localhost] 2026-04-05 04:14:03.590683 | orchestrator | 2026-04-05 04:14:03.590697 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-05 04:14:03.590711 | orchestrator | Sunday 05 April 2026 04:13:42 +0000 (0:00:05.720) 0:00:23.883 ********** 2026-04-05 04:14:03.590730 | orchestrator | changed: [localhost] 2026-04-05 04:14:03.590745 | orchestrator | 2026-04-05 04:14:03.590760 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-05 04:14:03.590774 | orchestrator | Sunday 05 April 2026 04:13:49 +0000 (0:00:07.122) 0:00:31.006 ********** 2026-04-05 04:14:03.590790 | orchestrator | changed: [localhost] 2026-04-05 04:14:03.590805 | orchestrator | 2026-04-05 04:14:03.590821 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-05 04:14:03.590835 | orchestrator | Sunday 05 April 2026 04:13:54 +0000 (0:00:05.102) 0:00:36.109 ********** 2026-04-05 04:14:03.590850 | orchestrator | changed: [localhost] 2026-04-05 04:14:03.590864 | orchestrator | 2026-04-05 04:14:03.590879 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-05 04:14:03.590907 | orchestrator | Sunday 05 April 2026 04:13:59 +0000 (0:00:04.489) 0:00:40.598 ********** 2026-04-05 04:14:03.590951 | orchestrator | ok: [localhost] 2026-04-05 04:14:03.590977 | orchestrator | 2026-04-05 04:14:03.590991 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:14:03.591007 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 04:14:03.591023 | orchestrator | 2026-04-05 04:14:03.591038 | orchestrator | 2026-04-05 04:14:03.591052 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:14:03.591067 | orchestrator | Sunday 05 April 2026 04:14:03 +0000 (0:00:03.998) 0:00:44.596 ********** 2026-04-05 04:14:03.591083 | orchestrator | =============================================================================== 2026-04-05 04:14:03.591097 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.38s 2026-04-05 04:14:03.591112 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.33s 2026-04-05 04:14:03.591126 | 
orchestrator | Set public network to default ------------------------------------------- 7.12s 2026-04-05 04:14:03.591141 | orchestrator | Create public network --------------------------------------------------- 5.72s 2026-04-05 04:14:03.591184 | orchestrator | Create public subnet ---------------------------------------------------- 5.10s 2026-04-05 04:14:03.591201 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.49s 2026-04-05 04:14:03.591216 | orchestrator | Create manager role ----------------------------------------------------- 4.00s 2026-04-05 04:14:03.591231 | orchestrator | Gathering Facts --------------------------------------------------------- 2.36s 2026-04-05 04:14:06.439100 | orchestrator | 2026-04-05 04:14:06 | INFO  | It takes a moment until task 78f86ccd-f8db-4c8c-bdd4-3b3bab5f2dde (image-manager) has been started and output is visible here. 2026-04-05 04:14:51.601375 | orchestrator | 2026-04-05 04:14:09 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-05 04:14:51.601519 | orchestrator | 2026-04-05 04:14:09 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-05 04:14:51.601551 | orchestrator | 2026-04-05 04:14:09 | INFO  | Importing image Cirros 0.6.2 2026-04-05 04:14:51.601573 | orchestrator | 2026-04-05 04:14:09 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-05 04:14:51.601595 | orchestrator | 2026-04-05 04:14:11 | INFO  | Waiting for image to leave queued state... 2026-04-05 04:14:51.601616 | orchestrator | 2026-04-05 04:14:13 | INFO  | Waiting for import to complete... 
2026-04-05 04:14:51.601634 | orchestrator | 2026-04-05 04:14:23 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-05 04:14:51.601653 | orchestrator | 2026-04-05 04:14:24 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-05 04:14:51.601673 | orchestrator | 2026-04-05 04:14:24 | INFO  | Setting internal_version = 0.6.2 2026-04-05 04:14:51.601693 | orchestrator | 2026-04-05 04:14:24 | INFO  | Setting image_original_user = cirros 2026-04-05 04:14:51.601715 | orchestrator | 2026-04-05 04:14:24 | INFO  | Adding tag os:cirros 2026-04-05 04:14:51.601734 | orchestrator | 2026-04-05 04:14:24 | INFO  | Setting property architecture: x86_64 2026-04-05 04:14:51.601753 | orchestrator | 2026-04-05 04:14:24 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 04:14:51.601774 | orchestrator | 2026-04-05 04:14:25 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 04:14:51.601795 | orchestrator | 2026-04-05 04:14:25 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 04:14:51.601817 | orchestrator | 2026-04-05 04:14:25 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 04:14:51.601839 | orchestrator | 2026-04-05 04:14:25 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 04:14:51.601860 | orchestrator | 2026-04-05 04:14:26 | INFO  | Setting property os_distro: cirros 2026-04-05 04:14:51.601881 | orchestrator | 2026-04-05 04:14:26 | INFO  | Setting property os_purpose: minimal 2026-04-05 04:14:51.601901 | orchestrator | 2026-04-05 04:14:26 | INFO  | Setting property replace_frequency: never 2026-04-05 04:14:51.601953 | orchestrator | 2026-04-05 04:14:26 | INFO  | Setting property uuid_validity: none 2026-04-05 04:14:51.601973 | orchestrator | 2026-04-05 04:14:27 | INFO  | Setting property provided_until: none 2026-04-05 04:14:51.601993 | orchestrator | 2026-04-05 04:14:27 | INFO  | Setting property image_description: Cirros 2026-04-05 04:14:51.602013 | orchestrator | 2026-04-05 04:14:27 | INFO  | 
Setting property image_name: Cirros 2026-04-05 04:14:51.602112 | orchestrator | 2026-04-05 04:14:28 | INFO  | Setting property internal_version: 0.6.2 2026-04-05 04:14:51.602134 | orchestrator | 2026-04-05 04:14:28 | INFO  | Setting property image_original_user: cirros 2026-04-05 04:14:51.602188 | orchestrator | 2026-04-05 04:14:28 | INFO  | Setting property os_version: 0.6.2 2026-04-05 04:14:51.602222 | orchestrator | 2026-04-05 04:14:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-05 04:14:51.602243 | orchestrator | 2026-04-05 04:14:29 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-05 04:14:51.602261 | orchestrator | 2026-04-05 04:14:29 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-05 04:14:51.602280 | orchestrator | 2026-04-05 04:14:29 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-05 04:14:51.602299 | orchestrator | 2026-04-05 04:14:29 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-05 04:14:51.602318 | orchestrator | 2026-04-05 04:14:30 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-05 04:14:51.602344 | orchestrator | 2026-04-05 04:14:30 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-05 04:14:51.602365 | orchestrator | 2026-04-05 04:14:30 | INFO  | Importing image Cirros 0.6.3 2026-04-05 04:14:51.602385 | orchestrator | 2026-04-05 04:14:30 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-05 04:14:51.602406 | orchestrator | 2026-04-05 04:14:32 | INFO  | Waiting for image to leave queued state... 2026-04-05 04:14:51.602425 | orchestrator | 2026-04-05 04:14:34 | INFO  | Waiting for import to complete... 
2026-04-05 04:14:51.602476 | orchestrator | 2026-04-05 04:14:44 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-05 04:14:51.602499 | orchestrator | 2026-04-05 04:14:45 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-05 04:14:51.602519 | orchestrator | 2026-04-05 04:14:45 | INFO  | Setting internal_version = 0.6.3 2026-04-05 04:14:51.602539 | orchestrator | 2026-04-05 04:14:45 | INFO  | Setting image_original_user = cirros 2026-04-05 04:14:51.602558 | orchestrator | 2026-04-05 04:14:45 | INFO  | Adding tag os:cirros 2026-04-05 04:14:51.602578 | orchestrator | 2026-04-05 04:14:45 | INFO  | Setting property architecture: x86_64 2026-04-05 04:14:51.602598 | orchestrator | 2026-04-05 04:14:45 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 04:14:51.602613 | orchestrator | 2026-04-05 04:14:46 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 04:14:51.602624 | orchestrator | 2026-04-05 04:14:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 04:14:51.602635 | orchestrator | 2026-04-05 04:14:46 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 04:14:51.602646 | orchestrator | 2026-04-05 04:14:46 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 04:14:51.602657 | orchestrator | 2026-04-05 04:14:47 | INFO  | Setting property os_distro: cirros 2026-04-05 04:14:51.602668 | orchestrator | 2026-04-05 04:14:47 | INFO  | Setting property os_purpose: minimal 2026-04-05 04:14:51.602678 | orchestrator | 2026-04-05 04:14:47 | INFO  | Setting property replace_frequency: never 2026-04-05 04:14:51.602690 | orchestrator | 2026-04-05 04:14:47 | INFO  | Setting property uuid_validity: none 2026-04-05 04:14:51.602701 | orchestrator | 2026-04-05 04:14:48 | INFO  | Setting property provided_until: none 2026-04-05 04:14:51.602711 | orchestrator | 2026-04-05 04:14:48 | INFO  | Setting property image_description: Cirros 2026-04-05 04:14:51.602722 | orchestrator | 2026-04-05 04:14:48 | INFO  | 
Setting property image_name: Cirros 2026-04-05 04:14:51.602733 | orchestrator | 2026-04-05 04:14:48 | INFO  | Setting property internal_version: 0.6.3 2026-04-05 04:14:51.602777 | orchestrator | 2026-04-05 04:14:49 | INFO  | Setting property image_original_user: cirros 2026-04-05 04:14:51.602789 | orchestrator | 2026-04-05 04:14:49 | INFO  | Setting property os_version: 0.6.3 2026-04-05 04:14:51.602799 | orchestrator | 2026-04-05 04:14:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-05 04:14:51.602810 | orchestrator | 2026-04-05 04:14:49 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-05 04:14:51.602821 | orchestrator | 2026-04-05 04:14:50 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-05 04:14:51.602832 | orchestrator | 2026-04-05 04:14:50 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-05 04:14:51.602842 | orchestrator | 2026-04-05 04:14:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-05 04:14:52.001592 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-04-05 04:14:54.392844 | orchestrator | 2026-04-05 04:14:54 | INFO  | date: 2026-04-05 2026-04-05 04:14:54.392976 | orchestrator | 2026-04-05 04:14:54 | INFO  | image: octavia-amphora-haproxy-2024.2.20260405.qcow2 2026-04-05 04:14:54.393018 | orchestrator | 2026-04-05 04:14:54 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260405.qcow2 2026-04-05 04:14:54.393033 | orchestrator | 2026-04-05 04:14:54 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260405.qcow2.CHECKSUM 2026-04-05 04:14:54.500279 | orchestrator | 2026-04-05 04:14:54 | INFO  | checksum: a096f1c9657d28508c7a70efec2bf3f9411b30bf35436da58b796042bc73acf8 2026-04-05 04:14:54.588055 | orchestrator | 
2026-04-05 04:14:54 | INFO  | It takes a moment until task b7b1facf-aaba-448e-b0c0-33a2f1875853 (image-manager) has been started and output is visible here. 2026-04-05 04:16:08.183980 | orchestrator | 2026-04-05 04:14:57 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-05' 2026-04-05 04:16:08.184077 | orchestrator | 2026-04-05 04:14:57 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260405.qcow2: 200 2026-04-05 04:16:08.184092 | orchestrator | 2026-04-05 04:14:57 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-05 2026-04-05 04:16:08.184114 | orchestrator | 2026-04-05 04:14:57 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260405.qcow2 2026-04-05 04:16:08.184125 | orchestrator | 2026-04-05 04:14:58 | INFO  | Waiting for image to leave queued state... 2026-04-05 04:16:08.184135 | orchestrator | 2026-04-05 04:15:00 | INFO  | Waiting for import to complete... 2026-04-05 04:16:08.184150 | orchestrator | 2026-04-05 04:15:10 | INFO  | Waiting for import to complete... 2026-04-05 04:16:08.184164 | orchestrator | 2026-04-05 04:15:20 | INFO  | Waiting for import to complete... 2026-04-05 04:16:08.184178 | orchestrator | 2026-04-05 04:15:31 | INFO  | Waiting for import to complete... 2026-04-05 04:16:08.184195 | orchestrator | 2026-04-05 04:15:41 | INFO  | Waiting for import to complete... 2026-04-05 04:16:08.184210 | orchestrator | 2026-04-05 04:15:51 | INFO  | Waiting for import to complete... 
2026-04-05 04:16:08.184225 | orchestrator | 2026-04-05 04:16:01 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-05' successfully completed, reloading images 2026-04-05 04:16:08.184238 | orchestrator | 2026-04-05 04:16:02 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-05' 2026-04-05 04:16:08.184282 | orchestrator | 2026-04-05 04:16:02 | INFO  | Setting internal_version = 2026-04-05 2026-04-05 04:16:08.184299 | orchestrator | 2026-04-05 04:16:02 | INFO  | Setting image_original_user = ubuntu 2026-04-05 04:16:08.184315 | orchestrator | 2026-04-05 04:16:02 | INFO  | Adding tag amphora 2026-04-05 04:16:08.184330 | orchestrator | 2026-04-05 04:16:02 | INFO  | Adding tag os:ubuntu 2026-04-05 04:16:08.184345 | orchestrator | 2026-04-05 04:16:02 | INFO  | Setting property architecture: x86_64 2026-04-05 04:16:08.184360 | orchestrator | 2026-04-05 04:16:03 | INFO  | Setting property hw_disk_bus: scsi 2026-04-05 04:16:08.184375 | orchestrator | 2026-04-05 04:16:03 | INFO  | Setting property hw_rng_model: virtio 2026-04-05 04:16:08.184387 | orchestrator | 2026-04-05 04:16:03 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-05 04:16:08.184396 | orchestrator | 2026-04-05 04:16:03 | INFO  | Setting property hw_watchdog_action: reset 2026-04-05 04:16:08.184405 | orchestrator | 2026-04-05 04:16:04 | INFO  | Setting property hypervisor_type: qemu 2026-04-05 04:16:08.184414 | orchestrator | 2026-04-05 04:16:04 | INFO  | Setting property os_distro: ubuntu 2026-04-05 04:16:08.184422 | orchestrator | 2026-04-05 04:16:04 | INFO  | Setting property replace_frequency: quarterly 2026-04-05 04:16:08.184431 | orchestrator | 2026-04-05 04:16:04 | INFO  | Setting property uuid_validity: last-1 2026-04-05 04:16:08.184439 | orchestrator | 2026-04-05 04:16:05 | INFO  | Setting property provided_until: none 2026-04-05 04:16:08.184448 | orchestrator | 2026-04-05 04:16:05 | INFO  | Setting property os_purpose: network 2026-04-05 04:16:08.184471 | orchestrator 
| 2026-04-05 04:16:05 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-05 04:16:08.184480 | orchestrator | 2026-04-05 04:16:05 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-05 04:16:08.184489 | orchestrator | 2026-04-05 04:16:06 | INFO  | Setting property internal_version: 2026-04-05 2026-04-05 04:16:08.184499 | orchestrator | 2026-04-05 04:16:06 | INFO  | Setting property image_original_user: ubuntu 2026-04-05 04:16:08.184510 | orchestrator | 2026-04-05 04:16:06 | INFO  | Setting property os_version: 2026-04-05 2026-04-05 04:16:08.184520 | orchestrator | 2026-04-05 04:16:07 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260405.qcow2 2026-04-05 04:16:08.184531 | orchestrator | 2026-04-05 04:16:07 | INFO  | Setting property image_build_date: 2026-04-05 2026-04-05 04:16:08.184541 | orchestrator | 2026-04-05 04:16:07 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-05' 2026-04-05 04:16:08.184552 | orchestrator | 2026-04-05 04:16:07 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-05' 2026-04-05 04:16:08.184580 | orchestrator | 2026-04-05 04:16:07 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-05 04:16:08.184590 | orchestrator | 2026-04-05 04:16:07 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-05 04:16:08.184601 | orchestrator | 2026-04-05 04:16:07 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-05 04:16:08.184612 | orchestrator | 2026-04-05 04:16:07 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-05 04:16:08.944499 | orchestrator | ok: Runtime: 0:03:10.456192 2026-04-05 04:16:08.963681 | 2026-04-05 04:16:08.963834 | TASK [Run checks] 2026-04-05 04:16:09.739453 | orchestrator | + set -e 2026-04-05 04:16:09.739699 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-05 04:16:09.739744 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:16:09.739782 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:16:09.739805 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:16:09.739826 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:16:09.739849 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 04:16:09.741010 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 04:16:09.749052 | orchestrator | 2026-04-05 04:16:09.749133 | orchestrator | # CHECK 2026-04-05 04:16:09.749147 | orchestrator | 2026-04-05 04:16:09.749159 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:16:09.749177 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:16:09.749188 | orchestrator | + echo 2026-04-05 04:16:09.749200 | orchestrator | + echo '# CHECK' 2026-04-05 04:16:09.749211 | orchestrator | + echo 2026-04-05 04:16:09.749226 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 04:16:09.750243 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-05 04:16:09.828561 | orchestrator | 2026-04-05 04:16:09.828684 | orchestrator | ## Containers @ testbed-manager 2026-04-05 04:16:09.828703 | orchestrator | 2026-04-05 04:16:09.828718 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 04:16:09.828731 | orchestrator | + echo 2026-04-05 04:16:09.828742 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-05 04:16:09.828754 | orchestrator | + echo 2026-04-05 04:16:09.828766 | orchestrator | + osism container testbed-manager ps 2026-04-05 04:16:12.029440 | orchestrator | 2026-04-05 04:16:12 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-05 04:16:12.455742 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 04:16:12.455869 | orchestrator | 709afa880d22 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-04-05 04:16:12.455894 | orchestrator | 99eba2f57d31 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_alertmanager 2026-04-05 04:16:12.455949 | orchestrator | 374eca326118 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-05 04:16:12.455961 | orchestrator | 5dc1623ac8aa registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-05 04:16:12.455971 | orchestrator | 2f7bc56a8eee registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-04-05 04:16:12.455986 | orchestrator | d5f6fc4283fb registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient 2026-04-05 04:16:12.455997 | orchestrator | a494550ed4e1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-05 04:16:12.456004 | orchestrator | 3f0d4a28fb01 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-05 04:16:12.456034 | orchestrator | 24c9e70bf436 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-05 04:16:12.456041 | orchestrator | 2cf84ab92981 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-04-05 04:16:12.456047 | orchestrator | 49e62b023122 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-04-05 04:16:12.456054 
| orchestrator | 2d4360393121 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-04-05 04:16:12.456060 | orchestrator | 6e62d5fc96c4 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-04-05 04:16:12.456067 | orchestrator | 8aa92a85ce00 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-05 04:16:12.456092 | orchestrator | 80cafa709cfa registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-04-05 04:16:12.456099 | orchestrator | 06be0547dc72 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-04-05 04:16:12.456106 | orchestrator | 5d463df4826e registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-04-05 04:16:12.456112 | orchestrator | edd425080adf registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-04-05 04:16:12.456119 | orchestrator | 840ecd1cb87b registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-04-05 04:16:12.456125 | orchestrator | 3ecc77353e81 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-05 04:16:12.456131 | orchestrator | 40d4b8def25a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-04-05 04:16:12.456137 | orchestrator | e4f10bd62ce2 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-05 
04:16:12.456149 | orchestrator | 835731093bcf registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-04-05 04:16:12.456155 | orchestrator | 11edf3be9d74 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-04-05 04:16:12.456161 | orchestrator | 0eb328ae91aa registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-04-05 04:16:12.456168 | orchestrator | 8366c9227199 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-05 04:16:12.456174 | orchestrator | f00f531a1a6c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-05 04:16:12.456180 | orchestrator | 40c0614e15fd registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-04-05 04:16:12.456189 | orchestrator | fe66de45f7fa registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-04-05 04:16:12.456196 | orchestrator | 133966d7c834 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-05 04:16:12.821481 | orchestrator | 2026-04-05 04:16:12.821614 | orchestrator | ## Images @ testbed-manager 2026-04-05 04:16:12.821636 | orchestrator | 2026-04-05 04:16:12.821653 | orchestrator | + echo 2026-04-05 04:16:12.821671 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-05 04:16:12.821685 | orchestrator | + echo 2026-04-05 04:16:12.821698 | orchestrator | + osism container testbed-manager images 2026-04-05 04:16:15.392594 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 04:16:15.392714 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 2fd96e7e9166 24 hours ago 239MB 2026-04-05 04:16:15.392732 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-05 04:16:15.392743 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-05 04:16:15.392754 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-05 04:16:15.392768 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 04:16:15.392776 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 04:16:15.392783 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 04:16:15.392790 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-05 04:16:15.392796 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 04:16:15.392826 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-05 04:16:15.392833 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-05 04:16:15.392840 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 04:16:15.392846 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-05 04:16:15.392852 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-05 04:16:15.392858 
| orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-05 04:16:15.392865 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-05 04:16:15.392871 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-05 04:16:15.392877 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-05 04:16:15.392883 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-05 04:16:15.392889 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-04-05 04:16:15.392896 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-05 04:16:15.392902 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-05 04:16:15.392961 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-05 04:16:15.392968 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-05 04:16:15.392974 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-05 04:16:15.767605 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 04:16:15.768506 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-05 04:16:15.831687 | orchestrator | 2026-04-05 04:16:15.831794 | orchestrator | ## Containers @ testbed-node-0 2026-04-05 04:16:15.831811 | orchestrator | 2026-04-05 04:16:15.831822 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 04:16:15.831833 | orchestrator | + echo 2026-04-05 04:16:15.831844 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-05 04:16:15.831856 | orchestrator | + echo 2026-04-05 04:16:15.831864 | orchestrator | + osism container 
testbed-node-0 ps 2026-04-05 04:16:18.450842 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 04:16:18.450947 | orchestrator | bd2fdccae974 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-05 04:16:18.450955 | orchestrator | 614a6b6b86db registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-05 04:16:18.450960 | orchestrator | 49b9d18d1e91 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-04-05 04:16:18.450964 | orchestrator | 51ed2151540a registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-04-05 04:16:18.450986 | orchestrator | a41d802f6c9a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-05 04:16:18.450990 | orchestrator | 75898e589d5d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-05 04:16:18.450999 | orchestrator | a81f2cc427de registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-05 04:16:18.451003 | orchestrator | 8b9912f31286 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-05 04:16:18.451007 | orchestrator | 3fb333461e07 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-04-05 04:16:18.451011 | orchestrator | e6c77e045ef8 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-05 04:16:18.451015 | orchestrator | 0555e1dfb61b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-05 04:16:18.451019 | orchestrator | 33b6e3043173 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-05 04:16:18.451022 | orchestrator | ca8c3230b6f1 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-05 04:16:18.451026 | orchestrator | 9ae847577d67 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-04-05 04:16:18.451030 | orchestrator | 58cdb42522d5 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator 2026-04-05 04:16:18.451034 | orchestrator | 189488462d9e registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-05 04:16:18.451041 | orchestrator | 8a806509ad78 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central 2026-04-05 04:16:18.451045 | orchestrator | 54e571fb01bf registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification 2026-04-05 04:16:18.451048 | orchestrator | d8b126dba4e2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-05 04:16:18.451065 | orchestrator | 513dace04bbe 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-05 04:16:18.451069 | orchestrator | 7a4be53b7639 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-04-05 04:16:18.451073 | orchestrator | 347f2817a734 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent 2026-04-05 04:16:18.451080 | orchestrator | 691d6bb24148 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-05 04:16:18.451084 | orchestrator | 0778a127a7e2 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-04-05 04:16:18.451088 | orchestrator | 8f8dc0ff7368 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_mdns 2026-04-05 04:16:18.451094 | orchestrator | 38d8774ed76a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_producer 2026-04-05 04:16:18.451098 | orchestrator | 09ab64223d13 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-05 04:16:18.451102 | orchestrator | 51e7ca889930 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-05 04:16:18.451106 | orchestrator | f758b9206867 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 
2026-04-05 04:16:18.451110 | orchestrator | 88b22943ea4a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-05 04:16:18.451114 | orchestrator | f47d4c384538 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-04-05 04:16:18.451118 | orchestrator | be05195f6a50 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_api 2026-04-05 04:16:18.451121 | orchestrator | 80d235815885 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-05 04:16:18.451125 | orchestrator | f2b11910e5e8 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-05 04:16:18.451129 | orchestrator | 2793f1af4550 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 04:16:18.451133 | orchestrator | da54ae634eab registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 04:16:18.451137 | orchestrator | a817255a8ffb registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-04-05 04:16:18.451141 | orchestrator | 92c535183251 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-05 04:16:18.451147 | orchestrator | 5e913e829cb4 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) 
skyline_apiserver 2026-04-05 04:16:18.451160 | orchestrator | f14782cb9cc8 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) horizon 2026-04-05 04:16:18.451167 | orchestrator | 390386464bb2 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-05 04:16:18.451173 | orchestrator | eb30b33081e7 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-05 04:16:18.451179 | orchestrator | edae16923d58 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api 2026-04-05 04:16:18.451185 | orchestrator | f60194ad9bbc registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-05 04:16:18.451193 | orchestrator | a525875a1579 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) neutron_server 2026-04-05 04:16:18.451202 | orchestrator | 12454ed032ed registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-05 04:16:18.451207 | orchestrator | 43bb788f2917 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone 2026-04-05 04:16:18.451213 | orchestrator | 36d466abbfb7 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_fernet 2026-04-05 04:16:18.451219 | orchestrator | 57e3de22b85e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_ssh 2026-04-05 04:16:18.451225 | 
orchestrator | 1c7661d3e93d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-0 2026-04-05 04:16:18.451231 | orchestrator | 62c9e12531bb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-04-05 04:16:18.451241 | orchestrator | b58ad7ef29db registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-04-05 04:16:18.451247 | orchestrator | 075d2940b4fb registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-05 04:16:18.451253 | orchestrator | 3dbb559f0614 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-05 04:16:18.452028 | orchestrator | 374c64770a42 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-05 04:16:18.452103 | orchestrator | b1918158c98e registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-05 04:16:18.452118 | orchestrator | 2fcbe89a63f4 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-05 04:16:18.452148 | orchestrator | 31ed438dacf2 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-05 04:16:18.452159 | orchestrator | 3a06d49d8efd registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-05 04:16:18.452168 | orchestrator | 7cfc799c0daf 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-05 04:16:18.452174 | orchestrator | bfc0b49c01ba registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-05 04:16:18.452180 | orchestrator | 23c52cb06ad3 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-05 04:16:18.452186 | orchestrator | 9ad8f4e37271 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-05 04:16:18.452192 | orchestrator | d1d7c8803e95 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-05 04:16:18.452198 | orchestrator | b22932f064ce registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-05 04:16:18.452203 | orchestrator | ee32479ded59 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-05 04:16:18.452211 | orchestrator | 6a6a4de988c2 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-05 04:16:18.452217 | orchestrator | 2e952fda0752 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-05 04:16:18.452224 | orchestrator | 221f72aa16fe registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-05 04:16:18.452229 | orchestrator | 314dd9819a5d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-05 04:16:18.452235 | 
orchestrator | b26297030cff registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-05 04:16:18.882342 | orchestrator | 2026-04-05 04:16:18.882465 | orchestrator | ## Images @ testbed-node-0 2026-04-05 04:16:18.882486 | orchestrator | 2026-04-05 04:16:18.882503 | orchestrator | + echo 2026-04-05 04:16:18.882520 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-05 04:16:18.882537 | orchestrator | + echo 2026-04-05 04:16:18.882552 | orchestrator | + osism container testbed-node-0 images 2026-04-05 04:16:21.495163 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 04:16:21.495264 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 04:16:21.495276 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-05 04:16:21.495285 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 04:16:21.495330 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 04:16:21.495349 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 04:16:21.495357 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 04:16:21.495364 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 04:16:21.495372 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 04:16:21.495384 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 04:16:21.495396 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months 
ago 274MB 2026-04-05 04:16:21.495407 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 04:16:21.495418 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 04:16:21.495429 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 04:16:21.495440 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 04:16:21.495453 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 04:16:21.495465 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 04:16:21.495478 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 04:16:21.495502 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 04:16:21.495515 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 04:16:21.495553 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 04:16:21.495561 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 04:16:21.495568 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-05 04:16:21.495576 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-05 04:16:21.495583 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 
30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-05 04:16:21.495591 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-05 04:16:21.495598 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-05 04:16:21.495605 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-05 04:16:21.495613 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-05 04:16:21.495620 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-05 04:16:21.495635 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-05 04:16:21.495642 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-05 04:16:21.495666 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-05 04:16:21.495674 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-05 04:16:21.495681 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-05 04:16:21.495689 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-05 04:16:21.495697 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-05 04:16:21.495709 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-05 04:16:21.495720 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 
abbd6e9f87e2 4 months ago 974MB 2026-04-05 04:16:21.495732 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-05 04:16:21.495744 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-05 04:16:21.495756 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-05 04:16:21.495767 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-05 04:16:21.495778 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-05 04:16:21.495791 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-05 04:16:21.495802 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-05 04:16:21.495814 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-05 04:16:21.495833 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-05 04:16:21.495846 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-05 04:16:21.495858 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-05 04:16:21.495871 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-05 04:16:21.495883 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-05 04:16:21.495893 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 
b1fcfbc49057 4 months ago 1.1GB 2026-04-05 04:16:21.495900 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-05 04:16:21.495939 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-05 04:16:21.495947 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-05 04:16:21.495962 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-05 04:16:21.495969 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-05 04:16:21.495977 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-05 04:16:21.495984 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-05 04:16:21.495991 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-05 04:16:21.495998 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-05 04:16:21.496005 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-05 04:16:21.496012 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-05 04:16:21.496027 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-05 04:16:21.496034 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-05 04:16:21.496042 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 
fcd09e53d925 4 months ago 840MB 2026-04-05 04:16:21.496049 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-05 04:16:21.496056 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-05 04:16:21.496063 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-05 04:16:21.901642 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 04:16:21.902234 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-05 04:16:21.965929 | orchestrator | 2026-04-05 04:16:21.966097 | orchestrator | ## Containers @ testbed-node-1 2026-04-05 04:16:21.966121 | orchestrator | 2026-04-05 04:16:21.966134 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 04:16:21.966146 | orchestrator | + echo 2026-04-05 04:16:21.966158 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-05 04:16:21.966170 | orchestrator | + echo 2026-04-05 04:16:21.966182 | orchestrator | + osism container testbed-node-1 ps 2026-04-05 04:16:24.488681 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 04:16:24.488765 | orchestrator | 44b6c048dd99 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-05 04:16:24.488776 | orchestrator | 3cbb11023a76 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-05 04:16:24.488785 | orchestrator | ca049fff1931 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-05 04:16:24.488792 | orchestrator | e9a0582a4f84 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 
2026-04-05 04:16:24.488818 | orchestrator | 8e91f50bbdef registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-05 04:16:24.488847 | orchestrator | 08c88edd6ba5 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-05 04:16:24.488856 | orchestrator | 5da8ff1419d5 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-05 04:16:24.488870 | orchestrator | 5a677e9f48a3 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-05 04:16:24.488883 | orchestrator | 0d29bbb2b4ee registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-04-05 04:16:24.488899 | orchestrator | cfb51f4fb049 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-05 04:16:24.489011 | orchestrator | 43b1fa402bf8 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-05 04:16:24.489026 | orchestrator | 0848bd9becce registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-05 04:16:24.489037 | orchestrator | 0f09c8093ca4 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-05 04:16:24.489049 | orchestrator | 158dc1cdf415 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes 
(healthy) aodh_listener 2026-04-05 04:16:24.489061 | orchestrator | 944f81caaa83 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator 2026-04-05 04:16:24.489072 | orchestrator | 12eead34ec32 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-05 04:16:24.489085 | orchestrator | aed3e789e5eb registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central 2026-04-05 04:16:24.489977 | orchestrator | e1ee4d123857 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification 2026-04-05 04:16:24.490014 | orchestrator | 5d91e44c6168 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-05 04:16:24.490075 | orchestrator | 3c9dca306512 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-05 04:16:24.490089 | orchestrator | 771cc8307b77 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_health_manager 2026-04-05 04:16:24.490098 | orchestrator | a52f9e0a8b28 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent 2026-04-05 04:16:24.490105 | orchestrator | 660a63ea78db registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-05 04:16:24.490125 | orchestrator | f57f604dbf61 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 
minutes ago Up 27 minutes (healthy) designate_worker 2026-04-05 04:16:24.490133 | orchestrator | 509a58a61800 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_mdns 2026-04-05 04:16:24.490140 | orchestrator | c5d3a2ff2aa7 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_producer 2026-04-05 04:16:24.490147 | orchestrator | 756418217de7 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-05 04:16:24.490163 | orchestrator | 1f5f285bb1ba registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-05 04:16:24.490171 | orchestrator | 62867a7622c2 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-05 04:16:24.490178 | orchestrator | 45f5bcb4db2e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker 2026-04-05 04:16:24.490185 | orchestrator | 9b455e7a2ecd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_keystone_listener 2026-04-05 04:16:24.490193 | orchestrator | 2c4f6637b9ab registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_api 2026-04-05 04:16:24.490200 | orchestrator | 7846f9ba5c61 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-05 04:16:24.490207 | orchestrator | 4c3fa6070596 
registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 32 minutes (healthy) cinder_volume 2026-04-05 04:16:24.490214 | orchestrator | 9fa9aae3fd8f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 04:16:24.490222 | orchestrator | ceb7809c63ba registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 04:16:24.490229 | orchestrator | 1f893000a458 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api 2026-04-05 04:16:24.490236 | orchestrator | fca2c833aa38 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-05 04:16:24.490254 | orchestrator | b4dc7e1e1d84 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-04-05 04:16:24.490262 | orchestrator | 1f33e837d3b4 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) horizon 2026-04-05 04:16:24.490269 | orchestrator | 5cfcf6b512a3 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-05 04:16:24.490281 | orchestrator | 5655aa60876f registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-05 04:16:24.490288 | orchestrator | 737201f5319b registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api 2026-04-05 04:16:24.490295 | orchestrator | 00f33cd6c472 
registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-05 04:16:24.490302 | orchestrator | 58c608bd4ea1 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) neutron_server 2026-04-05 04:16:24.490310 | orchestrator | 5c2191cfb7b9 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-05 04:16:24.490317 | orchestrator | d179ec352509 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone 2026-04-05 04:16:24.490324 | orchestrator | 4d075763ff3c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_fernet 2026-04-05 04:16:24.490331 | orchestrator | c81ea1f7b4cd registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_ssh 2026-04-05 04:16:24.490338 | orchestrator | 114a7f0e4f41 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-1 2026-04-05 04:16:24.490345 | orchestrator | dad71ff68e9b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-05 04:16:24.490353 | orchestrator | 0027b45af4f3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-04-05 04:16:24.490360 | orchestrator | 2a9d6753a582 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-05 04:16:24.490367 | orchestrator | 2859475023bf registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init 
--single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-05 04:16:24.490378 | orchestrator | f13d9bdb585d registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-05 04:16:24.490385 | orchestrator | 30809eb1e34d registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-05 04:16:24.490393 | orchestrator | d614c226d947 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-05 04:16:24.490400 | orchestrator | 40550d472c99 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-05 04:16:24.490416 | orchestrator | b663b4d1c38b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-05 04:16:24.490423 | orchestrator | fccaed6c65ec registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-05 04:16:24.490431 | orchestrator | 44fb1c7f8a08 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-05 04:16:24.490438 | orchestrator | 429e78cbed24 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-05 04:16:24.490445 | orchestrator | c246e8f730fe registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-05 04:16:24.490452 | orchestrator | 872b922b907a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) 
opensearch_dashboards 2026-04-05 04:16:24.490459 | orchestrator | a281e419b014 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-05 04:16:24.490466 | orchestrator | b4dbc48c46ef registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-05 04:16:24.490473 | orchestrator | d0856fe54e9d registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-05 04:16:24.490481 | orchestrator | 0fafa385aa9e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-05 04:16:24.490488 | orchestrator | d930562d74b0 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-05 04:16:24.490495 | orchestrator | 286747fa344f registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-05 04:16:24.490503 | orchestrator | 3f3385035dd7 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-05 04:16:24.898316 | orchestrator | 2026-04-05 04:16:24.898418 | orchestrator | ## Images @ testbed-node-1 2026-04-05 04:16:24.898430 | orchestrator | 2026-04-05 04:16:24.898438 | orchestrator | + echo 2026-04-05 04:16:24.898447 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-05 04:16:24.898456 | orchestrator | + echo 2026-04-05 04:16:24.898464 | orchestrator | + osism container testbed-node-1 images 2026-04-05 04:16:27.571409 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 04:16:27.571516 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 04:16:27.571532 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 
8a9865997707 4 months ago 266MB 2026-04-05 04:16:27.571543 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 04:16:27.571554 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 04:16:27.571566 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 04:16:27.571601 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 04:16:27.571613 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 04:16:27.571624 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 04:16:27.571634 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 04:16:27.571645 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-05 04:16:27.571656 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 04:16:27.571666 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 04:16:27.571677 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 04:16:27.571687 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 04:16:27.571697 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 04:16:27.571707 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 04:16:27.571717 | 
orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 04:16:27.571728 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 04:16:27.571738 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 04:16:27.571767 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 04:16:27.571780 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 04:16:27.571790 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-05 04:16:27.571800 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-05 04:16:27.571811 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-05 04:16:27.571822 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-05 04:16:27.571833 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-05 04:16:27.571848 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-05 04:16:27.571859 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-05 04:16:27.571868 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-05 04:16:27.571878 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months 
ago 1.13GB 2026-04-05 04:16:27.571889 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-05 04:16:27.571954 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-05 04:16:27.571980 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-05 04:16:27.571991 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-05 04:16:27.572004 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-05 04:16:27.572016 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-05 04:16:27.572026 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-05 04:16:27.572037 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-05 04:16:27.572057 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-05 04:16:27.572067 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-05 04:16:27.572078 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-05 04:16:27.572088 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-05 04:16:27.572098 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-05 04:16:27.572109 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-05 
04:16:27.572119 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-05 04:16:27.572129 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-05 04:16:27.572138 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-05 04:16:27.572149 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-05 04:16:27.572159 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-05 04:16:27.572169 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-05 04:16:27.572179 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-05 04:16:27.572190 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-05 04:16:27.572202 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-05 04:16:27.572212 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-05 04:16:27.572222 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-05 04:16:27.572233 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-05 04:16:27.572243 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-05 04:16:27.572264 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 
2026-04-05 04:16:27.572274 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-05 04:16:27.572284 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-05 04:16:27.572292 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-05 04:16:27.572301 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-05 04:16:27.572310 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-05 04:16:27.572332 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-05 04:16:27.572343 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-05 04:16:27.572354 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-05 04:16:27.572364 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-05 04:16:27.572374 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-05 04:16:27.572385 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-05 04:16:28.029422 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 04:16:28.030184 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-05 04:16:28.092294 | orchestrator | 2026-04-05 04:16:28.092387 | orchestrator | ## Containers @ testbed-node-2 2026-04-05 04:16:28.092398 | orchestrator | 2026-04-05 04:16:28.092405 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 04:16:28.092412 | orchestrator | + echo 
2026-04-05 04:16:28.092419 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-05 04:16:28.092427 | orchestrator | + echo 2026-04-05 04:16:28.092433 | orchestrator | + osism container testbed-node-2 ps 2026-04-05 04:16:30.713582 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 04:16:30.713668 | orchestrator | 60a1c036249a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-05 04:16:30.713680 | orchestrator | 3295a1c85898 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-05 04:16:30.713688 | orchestrator | ae767da9f671 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-05 04:16:30.713696 | orchestrator | 678dbba8ecf1 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-04-05 04:16:30.713705 | orchestrator | 2c5ea77c5d8a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-04-05 04:16:30.713712 | orchestrator | 8d7ac893b7c8 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-04-05 04:16:30.713720 | orchestrator | 385466ec06eb registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-05 04:16:30.713743 | orchestrator | 5cad3b660aab registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_node_exporter 2026-04-05 04:16:30.713751 | orchestrator | 5887f67b02aa 
registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share 2026-04-05 04:16:30.713759 | orchestrator | 8ee22e720f3d registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-04-05 04:16:30.713766 | orchestrator | 9befa270652f registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-04-05 04:16:30.713776 | orchestrator | 0a5bc954133a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-04-05 04:16:30.713784 | orchestrator | 45c108a1c6f1 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-04-05 04:16:30.713791 | orchestrator | ae01574164e9 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_listener 2026-04-05 04:16:30.713798 | orchestrator | 5ffa900f9ccc registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator 2026-04-05 04:16:30.713805 | orchestrator | 10ca8e9bb263 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_api 2026-04-05 04:16:30.713812 | orchestrator | 2ab138bf96f3 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central 2026-04-05 04:16:30.713820 | orchestrator | feef0c769a6b registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) ceilometer_notification 2026-04-05 04:16:30.713827 | orchestrator | 87c6a0341ea2 
registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-04-05 04:16:30.713847 | orchestrator | 6eac6cddde98 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-04-05 04:16:30.713855 | orchestrator | 14873fe92f3e registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_health_manager 2026-04-05 04:16:30.713862 | orchestrator | 8ce0cb700e70 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes octavia_driver_agent 2026-04-05 04:16:30.713870 | orchestrator | 5021835d1722 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) octavia_api 2026-04-05 04:16:30.713877 | orchestrator | 313869f4b11a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_worker 2026-04-05 04:16:30.713888 | orchestrator | a98e563d4823 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_mdns 2026-04-05 04:16:30.713895 | orchestrator | 8f291475d7c0 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_producer 2026-04-05 04:16:30.713902 | orchestrator | 6ceb5addbb1a registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_central 2026-04-05 04:16:30.713938 | orchestrator | 9a789974da27 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api 2026-04-05 04:16:30.713946 
| orchestrator | 9e7e74f4a403 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) designate_backend_bind9 2026-04-05 04:16:30.713953 | orchestrator | d7dd96818220 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_worker 2026-04-05 04:16:30.713960 | orchestrator | b227a938d5a6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_keystone_listener 2026-04-05 04:16:30.713967 | orchestrator | 370616088ebc registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) barbican_api 2026-04-05 04:16:30.713975 | orchestrator | 4a36788269d9 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup 2026-04-05 04:16:30.713982 | orchestrator | 0f2204c91a9c registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-05 04:16:30.713989 | orchestrator | 9d483dc21c6e registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 04:16:30.713996 | orchestrator | a95de148a94c registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 04:16:30.714003 | orchestrator | d4f8f1a152a8 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 35 minutes (healthy) glance_api 2026-04-05 04:16:30.714010 | orchestrator | e9528a3590c8 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console 2026-04-05 
04:16:30.714070 | orchestrator | 00c401fd300f registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) skyline_apiserver 2026-04-05 04:16:30.714088 | orchestrator | 6f915d8642c4 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) horizon 2026-04-05 04:16:30.714095 | orchestrator | f69c476644c5 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_novncproxy 2026-04-05 04:16:30.714102 | orchestrator | fe3ae25acb7d registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 44 minutes (healthy) nova_conductor 2026-04-05 04:16:30.714115 | orchestrator | 220f6ca8d32e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_api 2026-04-05 04:16:30.714123 | orchestrator | ae437e4e976d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-05 04:16:30.714131 | orchestrator | c95401721f26 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) neutron_server 2026-04-05 04:16:30.714139 | orchestrator | cbef4b6c4342 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) placement_api 2026-04-05 04:16:30.714147 | orchestrator | 52a97c9a362e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone 2026-04-05 04:16:30.714156 | orchestrator | f721c8759e9b registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_fernet 2026-04-05 04:16:30.714164 | orchestrator | 
4a18699a4c66 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) keystone_ssh 2026-04-05 04:16:30.714173 | orchestrator | 067fe773578f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" About an hour ago Up About an hour ceph-mgr-testbed-node-2 2026-04-05 04:16:30.714181 | orchestrator | b4125275ca8b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-05 04:16:30.714190 | orchestrator | d0e8f8775caf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-04-05 04:16:30.714201 | orchestrator | 5e415de55936 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-05 04:16:30.714209 | orchestrator | d7985107fc9b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-05 04:16:30.714218 | orchestrator | c77a9d1cd2c9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-05 04:16:30.714226 | orchestrator | 3d69d97c5094 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-05 04:16:30.714235 | orchestrator | 7289aa02d316 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-05 04:16:30.714243 | orchestrator | e98003c3a41a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-05 04:16:30.714251 | orchestrator | a982697e8bde 
registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-05 04:16:30.714264 | orchestrator | 7d10ccc83a05 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-05 04:16:30.714276 | orchestrator | 555d18ba6d9c registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-05 04:16:30.714285 | orchestrator | 7d396bfb2486 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-05 04:16:30.714293 | orchestrator | 86bd7be662b6 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-05 04:16:30.714302 | orchestrator | 0ace96af5fbe registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-05 04:16:30.714310 | orchestrator | f4affb064ff0 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-05 04:16:30.714321 | orchestrator | e682d8c206c1 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-05 04:16:30.714332 | orchestrator | 5f0484bedcaa registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-05 04:16:30.714344 | orchestrator | e49ec6a65286 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-05 04:16:30.714356 | orchestrator | ed42ab6b134f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-05 04:16:30.714367 
| orchestrator | dfa31d8c66c5 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-05 04:16:30.714379 | orchestrator | 7af208331354 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-05 04:16:31.177862 | orchestrator | 2026-04-05 04:16:31.178120 | orchestrator | ## Images @ testbed-node-2 2026-04-05 04:16:31.178147 | orchestrator | 2026-04-05 04:16:31.178157 | orchestrator | + echo 2026-04-05 04:16:31.178165 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-05 04:16:31.178174 | orchestrator | + echo 2026-04-05 04:16:31.178182 | orchestrator | + osism container testbed-node-2 images 2026-04-05 04:16:33.816584 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 04:16:33.816695 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 04:16:33.816709 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-05 04:16:33.816718 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 04:16:33.816726 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 04:16:33.816733 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 04:16:33.816740 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 04:16:33.816747 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 04:16:33.816776 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 04:16:33.816784 | orchestrator | 
registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 04:16:33.816791 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-05 04:16:33.816803 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 04:16:33.816810 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 04:16:33.816818 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 04:16:33.816825 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 04:16:33.816847 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 04:16:33.816855 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 04:16:33.816862 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 04:16:33.816869 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 04:16:33.816876 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 04:16:33.816883 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 04:16:33.816890 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 04:16:33.816899 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-05 04:16:33.816945 | 
orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-05 04:16:33.816958 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-05 04:16:33.816970 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-05 04:16:33.816981 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-05 04:16:33.816992 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-05 04:16:33.817004 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-05 04:16:33.817015 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-05 04:16:33.817026 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-05 04:16:33.817038 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-05 04:16:33.817071 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-05 04:16:33.817084 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-05 04:16:33.817113 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-05 04:16:33.817137 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-05 04:16:33.817149 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-05 04:16:33.817162 | orchestrator | 
registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-05 04:16:33.817175 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-05 04:16:33.817188 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-05 04:16:33.817199 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-05 04:16:33.817211 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-05 04:16:33.817223 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-05 04:16:33.817235 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-05 04:16:33.817247 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-05 04:16:33.817259 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-05 04:16:33.817272 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-05 04:16:33.817287 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-05 04:16:33.817303 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-05 04:16:33.817316 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-05 04:16:33.817328 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-05 04:16:33.817341 | orchestrator | 
registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-05 04:16:33.817354 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-05 04:16:33.817367 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-05 04:16:33.817380 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-05 04:16:33.817393 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-05 04:16:33.817406 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-05 04:16:33.817419 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-05 04:16:33.817432 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-05 04:16:33.817444 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-05 04:16:33.817465 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-05 04:16:33.817478 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-05 04:16:33.817491 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-05 04:16:33.817503 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-05 04:16:33.817525 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-05 04:16:33.817538 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-05 04:16:33.817550 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-05 04:16:33.817563 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-05 04:16:33.817577 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-05 04:16:33.817589 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-05 04:16:34.238467 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-05 04:16:34.246490 | orchestrator | + set -e 2026-04-05 04:16:34.246564 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 04:16:34.246573 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 04:16:34.246580 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 04:16:34.246586 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 04:16:34.246592 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 04:16:34.246599 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 04:16:34.246606 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 04:16:34.246615 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:16:34.246626 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:16:34.246640 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 04:16:34.246653 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 04:16:34.246663 | orchestrator | ++ export ARA=false 2026-04-05 04:16:34.246673 | orchestrator | ++ ARA=false 2026-04-05 04:16:34.246684 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 04:16:34.246694 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 04:16:34.246703 | orchestrator | ++ export TEMPEST=false 2026-04-05 04:16:34.246712 | orchestrator | ++ TEMPEST=false 2026-04-05 
04:16:34.246722 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 04:16:34.246731 | orchestrator | ++ IS_ZUUL=true 2026-04-05 04:16:34.246741 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:16:34.246752 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:16:34.246762 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 04:16:34.246771 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 04:16:34.246781 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 04:16:34.246791 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 04:16:34.246803 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 04:16:34.246814 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 04:16:34.246824 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 04:16:34.246834 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 04:16:34.246844 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 04:16:34.246855 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-05 04:16:34.259213 | orchestrator | + set -e 2026-04-05 04:16:34.259284 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:16:34.259305 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:16:34.259314 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:16:34.259322 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:16:34.259543 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:16:34.259562 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 04:16:34.260150 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 04:16:34.265096 | orchestrator | 2026-04-05 04:16:34.265168 | orchestrator | # Ceph status 2026-04-05 04:16:34.265182 | orchestrator | 2026-04-05 04:16:34.265193 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:16:34.265205 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:16:34.265217 | orchestrator | + echo 2026-04-05 04:16:34.265228 | orchestrator | + echo '# Ceph status' 2026-04-05 04:16:34.265240 | orchestrator | + echo 2026-04-05 04:16:34.265251 | orchestrator | + ceph -s 2026-04-05 04:16:34.915312 | orchestrator | cluster: 2026-04-05 04:16:34.915403 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-05 04:16:34.915412 | orchestrator | health: HEALTH_OK 2026-04-05 04:16:34.915420 | orchestrator | 2026-04-05 04:16:34.915426 | orchestrator | services: 2026-04-05 04:16:34.915432 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 73m) 2026-04-05 04:16:34.915441 | orchestrator | mgr: testbed-node-0(active, since 60m), standbys: testbed-node-1, testbed-node-2 2026-04-05 04:16:34.915449 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-05 04:16:34.915456 | orchestrator | osd: 6 osds: 6 up (since 69m), 6 in (since 70m) 2026-04-05 04:16:34.915463 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-05 04:16:34.915470 | orchestrator | 2026-04-05 04:16:34.915477 | orchestrator | data: 2026-04-05 04:16:34.915482 | orchestrator | volumes: 1/1 healthy 2026-04-05 04:16:34.915489 | orchestrator | pools: 14 pools, 401 pgs 2026-04-05 04:16:34.915496 | orchestrator | objects: 555 objects, 2.2 GiB 2026-04-05 04:16:34.915502 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-05 04:16:34.915509 | orchestrator | pgs: 401 active+clean 2026-04-05 04:16:34.915515 | orchestrator | 2026-04-05 04:16:34.975288 | orchestrator | 2026-04-05 04:16:34.975355 | orchestrator | # Ceph versions 2026-04-05 04:16:34.975361 | orchestrator | 2026-04-05 04:16:34.975366 | orchestrator | + echo 2026-04-05 04:16:34.975370 | orchestrator | + echo '# Ceph versions' 2026-04-05 04:16:34.975375 | orchestrator | + echo 2026-04-05 04:16:34.975380 | orchestrator | + ceph versions 2026-04-05 04:16:35.614506 | orchestrator | { 2026-04-05 
04:16:35.614613 | orchestrator | "mon": { 2026-04-05 04:16:35.614629 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-05 04:16:35.614641 | orchestrator | }, 2026-04-05 04:16:35.614653 | orchestrator | "mgr": { 2026-04-05 04:16:35.614664 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-05 04:16:35.614675 | orchestrator | }, 2026-04-05 04:16:35.614686 | orchestrator | "osd": { 2026-04-05 04:16:35.614697 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-05 04:16:35.614707 | orchestrator | }, 2026-04-05 04:16:35.614718 | orchestrator | "mds": { 2026-04-05 04:16:35.614729 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-05 04:16:35.614739 | orchestrator | }, 2026-04-05 04:16:35.614750 | orchestrator | "rgw": { 2026-04-05 04:16:35.614760 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-05 04:16:35.614771 | orchestrator | }, 2026-04-05 04:16:35.614782 | orchestrator | "overall": { 2026-04-05 04:16:35.614815 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-05 04:16:35.614827 | orchestrator | } 2026-04-05 04:16:35.614838 | orchestrator | } 2026-04-05 04:16:35.665181 | orchestrator | 2026-04-05 04:16:35.665293 | orchestrator | # Ceph OSD tree 2026-04-05 04:16:35.665319 | orchestrator | 2026-04-05 04:16:35.665338 | orchestrator | + echo 2026-04-05 04:16:35.665359 | orchestrator | + echo '# Ceph OSD tree' 2026-04-05 04:16:35.665382 | orchestrator | + echo 2026-04-05 04:16:35.665401 | orchestrator | + ceph osd df tree 2026-04-05 04:16:36.205547 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-05 04:16:36.205668 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 
385 MiB 113 GiB 5.88 1.00 - root default 2026-04-05 04:16:36.205685 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-04-05 04:16:36.205697 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.83 0.99 192 up osd.1 2026-04-05 04:16:36.205708 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.91 1.00 196 up osd.4 2026-04-05 04:16:36.205744 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-04-05 04:16:36.205771 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 62 MiB 19 GiB 5.08 0.86 174 up osd.0 2026-04-05 04:16:36.205782 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.65 1.13 218 up osd.3 2026-04-05 04:16:36.205793 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-04-05 04:16:36.205805 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.85 0.99 195 up osd.2 2026-04-05 04:16:36.205816 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.97 1.01 195 up osd.5 2026-04-05 04:16:36.205827 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 385 MiB 113 GiB 5.88 2026-04-05 04:16:36.205838 | orchestrator | MIN/MAX VAR: 0.86/1.13 STDDEV: 0.45 2026-04-05 04:16:36.260302 | orchestrator | 2026-04-05 04:16:36.260380 | orchestrator | # Ceph monitor status 2026-04-05 04:16:36.260391 | orchestrator | 2026-04-05 04:16:36.260400 | orchestrator | + echo 2026-04-05 04:16:36.260408 | orchestrator | + echo '# Ceph monitor status' 2026-04-05 04:16:36.260416 | orchestrator | + echo 2026-04-05 04:16:36.260424 | orchestrator | + ceph mon stat 2026-04-05 04:16:36.936768 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-05 04:16:36.989736 | orchestrator | 2026-04-05 04:16:36.989813 | orchestrator | # Ceph quorum status 2026-04-05 04:16:36.989823 | orchestrator | 2026-04-05 04:16:36.989831 | orchestrator | + echo 2026-04-05 04:16:36.989838 | orchestrator | + echo '# Ceph quorum status' 2026-04-05 04:16:36.989846 | orchestrator | + echo 2026-04-05 04:16:36.991055 | orchestrator | + ceph quorum_status 2026-04-05 04:16:36.991132 | orchestrator | + jq 2026-04-05 04:16:37.684481 | orchestrator | { 2026-04-05 04:16:37.684651 | orchestrator | "election_epoch": 4, 2026-04-05 04:16:37.684673 | orchestrator | "quorum": [ 2026-04-05 04:16:37.684685 | orchestrator | 0, 2026-04-05 04:16:37.684696 | orchestrator | 1, 2026-04-05 04:16:37.684706 | orchestrator | 2 2026-04-05 04:16:37.684717 | orchestrator | ], 2026-04-05 04:16:37.684728 | orchestrator | "quorum_names": [ 2026-04-05 04:16:37.684738 | orchestrator | "testbed-node-0", 2026-04-05 04:16:37.684749 | orchestrator | "testbed-node-1", 2026-04-05 04:16:37.684760 | orchestrator | "testbed-node-2" 2026-04-05 04:16:37.684770 | orchestrator | ], 2026-04-05 04:16:37.684781 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-05 04:16:37.684793 | orchestrator | "quorum_age": 4400, 2026-04-05 04:16:37.684804 | orchestrator | "features": { 2026-04-05 04:16:37.684815 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-05 04:16:37.684825 | orchestrator | "quorum_mon": [ 2026-04-05 04:16:37.684836 | orchestrator | "kraken", 2026-04-05 04:16:37.684846 | orchestrator | "luminous", 2026-04-05 04:16:37.684857 | orchestrator | "mimic", 2026-04-05 04:16:37.684868 | orchestrator | 
"osdmap-prune", 2026-04-05 04:16:37.684878 | orchestrator | "nautilus", 2026-04-05 04:16:37.684889 | orchestrator | "octopus", 2026-04-05 04:16:37.684899 | orchestrator | "pacific", 2026-04-05 04:16:37.684967 | orchestrator | "elector-pinging", 2026-04-05 04:16:37.684978 | orchestrator | "quincy", 2026-04-05 04:16:37.684990 | orchestrator | "reef" 2026-04-05 04:16:37.685001 | orchestrator | ] 2026-04-05 04:16:37.685011 | orchestrator | }, 2026-04-05 04:16:37.685022 | orchestrator | "monmap": { 2026-04-05 04:16:37.685033 | orchestrator | "epoch": 1, 2026-04-05 04:16:37.685044 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-05 04:16:37.685058 | orchestrator | "modified": "2026-04-05T03:03:05.197063Z", 2026-04-05 04:16:37.685071 | orchestrator | "created": "2026-04-05T03:03:05.197063Z", 2026-04-05 04:16:37.685084 | orchestrator | "min_mon_release": 18, 2026-04-05 04:16:37.685097 | orchestrator | "min_mon_release_name": "reef", 2026-04-05 04:16:37.685109 | orchestrator | "election_strategy": 1, 2026-04-05 04:16:37.685121 | orchestrator | "disallowed_leaders: ": "", 2026-04-05 04:16:37.685133 | orchestrator | "stretch_mode": false, 2026-04-05 04:16:37.685176 | orchestrator | "tiebreaker_mon": "", 2026-04-05 04:16:37.685189 | orchestrator | "removed_ranks: ": "", 2026-04-05 04:16:37.685202 | orchestrator | "features": { 2026-04-05 04:16:37.685214 | orchestrator | "persistent": [ 2026-04-05 04:16:37.685227 | orchestrator | "kraken", 2026-04-05 04:16:37.685239 | orchestrator | "luminous", 2026-04-05 04:16:37.685251 | orchestrator | "mimic", 2026-04-05 04:16:37.685263 | orchestrator | "osdmap-prune", 2026-04-05 04:16:37.685276 | orchestrator | "nautilus", 2026-04-05 04:16:37.685288 | orchestrator | "octopus", 2026-04-05 04:16:37.685301 | orchestrator | "pacific", 2026-04-05 04:16:37.685313 | orchestrator | "elector-pinging", 2026-04-05 04:16:37.685325 | orchestrator | "quincy", 2026-04-05 04:16:37.685337 | orchestrator | "reef" 2026-04-05 
04:16:37.685351 | orchestrator | ], 2026-04-05 04:16:37.685363 | orchestrator | "optional": [] 2026-04-05 04:16:37.685376 | orchestrator | }, 2026-04-05 04:16:37.685389 | orchestrator | "mons": [ 2026-04-05 04:16:37.685402 | orchestrator | { 2026-04-05 04:16:37.685414 | orchestrator | "rank": 0, 2026-04-05 04:16:37.685425 | orchestrator | "name": "testbed-node-0", 2026-04-05 04:16:37.685436 | orchestrator | "public_addrs": { 2026-04-05 04:16:37.685446 | orchestrator | "addrvec": [ 2026-04-05 04:16:37.685457 | orchestrator | { 2026-04-05 04:16:37.685468 | orchestrator | "type": "v2", 2026-04-05 04:16:37.685479 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-05 04:16:37.685490 | orchestrator | "nonce": 0 2026-04-05 04:16:37.685501 | orchestrator | }, 2026-04-05 04:16:37.685512 | orchestrator | { 2026-04-05 04:16:37.685522 | orchestrator | "type": "v1", 2026-04-05 04:16:37.685533 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-05 04:16:37.685544 | orchestrator | "nonce": 0 2026-04-05 04:16:37.685555 | orchestrator | } 2026-04-05 04:16:37.685565 | orchestrator | ] 2026-04-05 04:16:37.685576 | orchestrator | }, 2026-04-05 04:16:37.685588 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-05 04:16:37.685607 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-05 04:16:37.685624 | orchestrator | "priority": 0, 2026-04-05 04:16:37.685636 | orchestrator | "weight": 0, 2026-04-05 04:16:37.685646 | orchestrator | "crush_location": "{}" 2026-04-05 04:16:37.685663 | orchestrator | }, 2026-04-05 04:16:37.685680 | orchestrator | { 2026-04-05 04:16:37.685705 | orchestrator | "rank": 1, 2026-04-05 04:16:37.685726 | orchestrator | "name": "testbed-node-1", 2026-04-05 04:16:37.685743 | orchestrator | "public_addrs": { 2026-04-05 04:16:37.685761 | orchestrator | "addrvec": [ 2026-04-05 04:16:37.685780 | orchestrator | { 2026-04-05 04:16:37.685796 | orchestrator | "type": "v2", 2026-04-05 04:16:37.685814 | orchestrator | "addr": "192.168.16.11:3300", 
2026-04-05 04:16:37.685833 | orchestrator | "nonce": 0 2026-04-05 04:16:37.685851 | orchestrator | }, 2026-04-05 04:16:37.685869 | orchestrator | { 2026-04-05 04:16:37.685888 | orchestrator | "type": "v1", 2026-04-05 04:16:37.685934 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-05 04:16:37.685955 | orchestrator | "nonce": 0 2026-04-05 04:16:37.685974 | orchestrator | } 2026-04-05 04:16:37.685994 | orchestrator | ] 2026-04-05 04:16:37.686015 | orchestrator | }, 2026-04-05 04:16:37.686120 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-05 04:16:37.686140 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-05 04:16:37.686159 | orchestrator | "priority": 0, 2026-04-05 04:16:37.686179 | orchestrator | "weight": 0, 2026-04-05 04:16:37.686197 | orchestrator | "crush_location": "{}" 2026-04-05 04:16:37.686215 | orchestrator | }, 2026-04-05 04:16:37.686236 | orchestrator | { 2026-04-05 04:16:37.686255 | orchestrator | "rank": 2, 2026-04-05 04:16:37.686276 | orchestrator | "name": "testbed-node-2", 2026-04-05 04:16:37.686294 | orchestrator | "public_addrs": { 2026-04-05 04:16:37.686312 | orchestrator | "addrvec": [ 2026-04-05 04:16:37.686330 | orchestrator | { 2026-04-05 04:16:37.686347 | orchestrator | "type": "v2", 2026-04-05 04:16:37.686366 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-05 04:16:37.686387 | orchestrator | "nonce": 0 2026-04-05 04:16:37.686405 | orchestrator | }, 2026-04-05 04:16:37.686425 | orchestrator | { 2026-04-05 04:16:37.686445 | orchestrator | "type": "v1", 2026-04-05 04:16:37.686465 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-05 04:16:37.686486 | orchestrator | "nonce": 0 2026-04-05 04:16:37.686528 | orchestrator | } 2026-04-05 04:16:37.686549 | orchestrator | ] 2026-04-05 04:16:37.686571 | orchestrator | }, 2026-04-05 04:16:37.686592 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-05 04:16:37.686611 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-05 04:16:37.686630 | 
orchestrator | "priority": 0, 2026-04-05 04:16:37.686671 | orchestrator | "weight": 0, 2026-04-05 04:16:37.686691 | orchestrator | "crush_location": "{}" 2026-04-05 04:16:37.686711 | orchestrator | } 2026-04-05 04:16:37.686731 | orchestrator | ] 2026-04-05 04:16:37.686749 | orchestrator | } 2026-04-05 04:16:37.686769 | orchestrator | } 2026-04-05 04:16:37.686790 | orchestrator | 2026-04-05 04:16:37.686809 | orchestrator | # Ceph free space status 2026-04-05 04:16:37.686827 | orchestrator | + echo 2026-04-05 04:16:37.686847 | orchestrator | + echo '# Ceph free space status' 2026-04-05 04:16:37.686867 | orchestrator | + echo 2026-04-05 04:16:37.686976 | orchestrator | 2026-04-05 04:16:37.687001 | orchestrator | + ceph df 2026-04-05 04:16:38.348777 | orchestrator | --- RAW STORAGE --- 2026-04-05 04:16:38.348902 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-05 04:16:38.348993 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-04-05 04:16:38.349004 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-04-05 04:16:38.349014 | orchestrator | 2026-04-05 04:16:38.349029 | orchestrator | --- POOLS --- 2026-04-05 04:16:38.349047 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-05 04:16:38.349063 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-05 04:16:38.349079 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-05 04:16:38.349092 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-05 04:16:38.349106 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-05 04:16:38.349120 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-05 04:16:38.349135 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-05 04:16:38.349150 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-05 04:16:38.349164 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-05 04:16:38.349179 | orchestrator | 
.rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-04-05 04:16:38.349194 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 04:16:38.349227 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 04:16:38.349243 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB 2026-04-05 04:16:38.349272 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 04:16:38.349289 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-05 04:16:38.404116 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-05 04:16:38.467424 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 04:16:38.467525 | orchestrator | + osism apply facts 2026-04-05 04:16:50.910010 | orchestrator | 2026-04-05 04:16:50 | INFO  | Task f09955ff-7eb8-4f17-92c8-06b3f2388bab (facts) was prepared for execution. 2026-04-05 04:16:50.910186 | orchestrator | 2026-04-05 04:16:50 | INFO  | It takes a moment until task f09955ff-7eb8-4f17-92c8-06b3f2388bab (facts) has been started and output is visible here. 2026-04-05 04:17:05.599081 | orchestrator | 2026-04-05 04:17:05.599208 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 04:17:05.599216 | orchestrator | 2026-04-05 04:17:05.599221 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 04:17:05.599225 | orchestrator | Sunday 05 April 2026 04:16:55 +0000 (0:00:00.325) 0:00:00.325 ********** 2026-04-05 04:17:05.599229 | orchestrator | ok: [testbed-manager] 2026-04-05 04:17:05.599234 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:05.599238 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:05.599241 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:05.599245 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:17:05.599249 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:17:05.599268 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:17:05.599272 | orchestrator | 2026-04-05 04:17:05.599276 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-05 04:17:05.599280 | orchestrator | Sunday 05 April 2026 04:16:57 +0000 (0:00:01.303) 0:00:01.629 ********** 2026-04-05 04:17:05.599283 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:17:05.599288 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:17:05.599292 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:17:05.599295 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:17:05.599299 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:17:05.599303 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:17:05.599306 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:17:05.599310 | orchestrator | 2026-04-05 04:17:05.599314 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 04:17:05.599318 | orchestrator | 2026-04-05 04:17:05.599321 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 04:17:05.599325 | orchestrator | Sunday 05 April 2026 04:16:58 +0000 (0:00:01.488) 0:00:03.117 ********** 2026-04-05 04:17:05.599329 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:05.599333 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:05.599337 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:05.599341 | orchestrator | ok: [testbed-manager] 2026-04-05 04:17:05.599344 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:17:05.599348 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:17:05.599352 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:17:05.599356 | orchestrator | 2026-04-05 04:17:05.599360 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 04:17:05.599363 | orchestrator | 2026-04-05 04:17:05.599367 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 04:17:05.599371 | orchestrator | Sunday 05 April 
2026 04:17:04 +0000 (0:00:05.753) 0:00:08.871 ********** 2026-04-05 04:17:05.599375 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:17:05.599379 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:17:05.599383 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:17:05.599386 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:17:05.599390 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:17:05.599394 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:17:05.599397 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:17:05.599401 | orchestrator | 2026-04-05 04:17:05.599405 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:17:05.599409 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599414 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599418 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599422 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599425 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599429 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599433 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:17:05.599437 | orchestrator | 2026-04-05 04:17:05.599441 | orchestrator | 2026-04-05 04:17:05.599445 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:17:05.599448 | orchestrator | Sunday 05 April 2026 04:17:04 +0000 (0:00:00.669) 0:00:09.540 ********** 2026-04-05 04:17:05.599456 
| orchestrator | =============================================================================== 2026-04-05 04:17:05.599460 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2026-04-05 04:17:05.599464 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.49s 2026-04-05 04:17:05.599467 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.30s 2026-04-05 04:17:05.599471 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s 2026-04-05 04:17:05.985115 | orchestrator | + osism validate ceph-mons 2026-04-05 04:17:40.681865 | orchestrator | 2026-04-05 04:17:40.682930 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-05 04:17:40.682972 | orchestrator | 2026-04-05 04:17:40.682983 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-05 04:17:40.683010 | orchestrator | Sunday 05 April 2026 04:17:23 +0000 (0:00:00.460) 0:00:00.460 ********** 2026-04-05 04:17:40.683021 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 04:17:40.683031 | orchestrator | 2026-04-05 04:17:40.683040 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 04:17:40.683049 | orchestrator | Sunday 05 April 2026 04:17:24 +0000 (0:00:00.955) 0:00:01.415 ********** 2026-04-05 04:17:40.683059 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 04:17:40.683074 | orchestrator | 2026-04-05 04:17:40.683089 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 04:17:40.683104 | orchestrator | Sunday 05 April 2026 04:17:25 +0000 (0:00:01.109) 0:00:02.525 ********** 2026-04-05 04:17:40.683119 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683134 | orchestrator 
| 2026-04-05 04:17:40.683149 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 04:17:40.683162 | orchestrator | Sunday 05 April 2026 04:17:25 +0000 (0:00:00.135) 0:00:02.661 ********** 2026-04-05 04:17:40.683176 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683190 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:40.683205 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:40.683219 | orchestrator | 2026-04-05 04:17:40.683234 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-05 04:17:40.683250 | orchestrator | Sunday 05 April 2026 04:17:26 +0000 (0:00:00.375) 0:00:03.036 ********** 2026-04-05 04:17:40.683265 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:40.683281 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683295 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:40.683311 | orchestrator | 2026-04-05 04:17:40.683320 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-05 04:17:40.683332 | orchestrator | Sunday 05 April 2026 04:17:27 +0000 (0:00:01.170) 0:00:04.207 ********** 2026-04-05 04:17:40.683351 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:17:40.683372 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:17:40.683386 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:17:40.683400 | orchestrator | 2026-04-05 04:17:40.683413 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-05 04:17:40.683427 | orchestrator | Sunday 05 April 2026 04:17:27 +0000 (0:00:00.315) 0:00:04.523 ********** 2026-04-05 04:17:40.683441 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683454 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:40.683470 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:40.683484 | orchestrator | 2026-04-05 04:17:40.683498 | orchestrator | TASK 
[Prepare test data] ******************************************************* 2026-04-05 04:17:40.683512 | orchestrator | Sunday 05 April 2026 04:17:28 +0000 (0:00:00.564) 0:00:05.087 ********** 2026-04-05 04:17:40.683526 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683541 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:40.683556 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:40.683571 | orchestrator | 2026-04-05 04:17:40.683586 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-05 04:17:40.683630 | orchestrator | Sunday 05 April 2026 04:17:28 +0000 (0:00:00.327) 0:00:05.415 ********** 2026-04-05 04:17:40.683640 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:17:40.683649 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:17:40.683658 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:17:40.683667 | orchestrator | 2026-04-05 04:17:40.683675 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-05 04:17:40.683684 | orchestrator | Sunday 05 April 2026 04:17:28 +0000 (0:00:00.314) 0:00:05.729 ********** 2026-04-05 04:17:40.683693 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:17:40.683701 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:17:40.683710 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:17:40.683718 | orchestrator | 2026-04-05 04:17:40.683735 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 04:17:40.683744 | orchestrator | Sunday 05 April 2026 04:17:29 +0000 (0:00:00.554) 0:00:06.283 ********** 2026-04-05 04:17:40.683753 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:17:40.683761 | orchestrator | 2026-04-05 04:17:40.683770 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 04:17:40.683778 | orchestrator | Sunday 05 April 2026 04:17:29 +0000 (0:00:00.267) 0:00:06.551 
**********
2026-04-05 04:17:40.683787 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.683795 | orchestrator |
2026-04-05 04:17:40.683804 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:17:40.683813 | orchestrator | Sunday 05 April 2026 04:17:29 +0000 (0:00:00.284) 0:00:06.836 **********
2026-04-05 04:17:40.683821 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.683830 | orchestrator |
2026-04-05 04:17:40.683838 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:40.683847 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.303) 0:00:07.139 **********
2026-04-05 04:17:40.683856 | orchestrator |
2026-04-05 04:17:40.683864 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:40.683873 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.077) 0:00:07.216 **********
2026-04-05 04:17:40.684025 | orchestrator |
2026-04-05 04:17:40.684036 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:40.684045 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.084) 0:00:07.301 **********
2026-04-05 04:17:40.684053 | orchestrator |
2026-04-05 04:17:40.684062 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:17:40.684071 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.098) 0:00:07.399 **********
2026-04-05 04:17:40.684080 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.684088 | orchestrator |
2026-04-05 04:17:40.684097 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-05 04:17:40.684106 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.290) 0:00:07.689 **********
2026-04-05 04:17:40.684120 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.684151 | orchestrator |
2026-04-05 04:17:40.684195 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-05 04:17:40.684211 | orchestrator | Sunday 05 April 2026 04:17:30 +0000 (0:00:00.276) 0:00:07.966 **********
2026-04-05 04:17:40.684227 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684242 | orchestrator |
2026-04-05 04:17:40.684257 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-05 04:17:40.684268 | orchestrator | Sunday 05 April 2026 04:17:31 +0000 (0:00:00.132) 0:00:08.098 **********
2026-04-05 04:17:40.684277 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:17:40.684285 | orchestrator |
2026-04-05 04:17:40.684298 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-05 04:17:40.684363 | orchestrator | Sunday 05 April 2026 04:17:32 +0000 (0:00:01.716) 0:00:09.815 **********
2026-04-05 04:17:40.684383 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684396 | orchestrator |
2026-04-05 04:17:40.684422 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-05 04:17:40.684435 | orchestrator | Sunday 05 April 2026 04:17:33 +0000 (0:00:00.590) 0:00:10.405 **********
2026-04-05 04:17:40.684447 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.684459 | orchestrator |
2026-04-05 04:17:40.684473 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-05 04:17:40.684488 | orchestrator | Sunday 05 April 2026 04:17:33 +0000 (0:00:00.127) 0:00:10.533 **********
2026-04-05 04:17:40.684502 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684517 | orchestrator |
2026-04-05 04:17:40.684532 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-05 04:17:40.684547 | orchestrator | Sunday 05 April 2026 04:17:33 +0000 (0:00:00.371) 0:00:10.904 **********
2026-04-05 04:17:40.684561 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684577 | orchestrator |
2026-04-05 04:17:40.684591 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-05 04:17:40.684615 | orchestrator | Sunday 05 April 2026 04:17:34 +0000 (0:00:00.337) 0:00:11.241 **********
2026-04-05 04:17:40.684630 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.684644 | orchestrator |
2026-04-05 04:17:40.684657 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-05 04:17:40.684670 | orchestrator | Sunday 05 April 2026 04:17:34 +0000 (0:00:00.119) 0:00:11.360 **********
2026-04-05 04:17:40.684683 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684696 | orchestrator |
2026-04-05 04:17:40.684711 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-05 04:17:40.684725 | orchestrator | Sunday 05 April 2026 04:17:34 +0000 (0:00:00.134) 0:00:11.495 **********
2026-04-05 04:17:40.684739 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684754 | orchestrator |
2026-04-05 04:17:40.684768 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-05 04:17:40.684783 | orchestrator | Sunday 05 April 2026 04:17:34 +0000 (0:00:00.133) 0:00:11.629 **********
2026-04-05 04:17:40.684798 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:17:40.684812 | orchestrator |
2026-04-05 04:17:40.684827 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-05 04:17:40.684836 | orchestrator | Sunday 05 April 2026 04:17:35 +0000 (0:00:01.325) 0:00:12.955 **********
2026-04-05 04:17:40.684845 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.684856 | orchestrator |
2026-04-05 04:17:40.684870 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-05 04:17:40.684887 | orchestrator | Sunday 05 April 2026 04:17:36 +0000 (0:00:00.349) 0:00:13.304 **********
2026-04-05 04:17:40.684939 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.684953 | orchestrator |
2026-04-05 04:17:40.684967 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-05 04:17:40.684981 | orchestrator | Sunday 05 April 2026 04:17:36 +0000 (0:00:00.186) 0:00:13.491 **********
2026-04-05 04:17:40.684995 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:17:40.685010 | orchestrator |
2026-04-05 04:17:40.685025 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-05 04:17:40.685050 | orchestrator | Sunday 05 April 2026 04:17:36 +0000 (0:00:00.190) 0:00:13.682 **********
2026-04-05 04:17:40.685059 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.685068 | orchestrator |
2026-04-05 04:17:40.685077 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-05 04:17:40.685085 | orchestrator | Sunday 05 April 2026 04:17:36 +0000 (0:00:00.147) 0:00:13.830 **********
2026-04-05 04:17:40.685094 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.685106 | orchestrator |
2026-04-05 04:17:40.685121 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 04:17:40.685141 | orchestrator | Sunday 05 April 2026 04:17:37 +0000 (0:00:00.394) 0:00:14.224 **********
2026-04-05 04:17:40.685158 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:17:40.685185 | orchestrator |
2026-04-05 04:17:40.685199 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 04:17:40.685212 | orchestrator | Sunday 05 April 2026 04:17:37 +0000 (0:00:00.280) 0:00:14.505 **********
2026-04-05 04:17:40.685223 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:17:40.685236 | orchestrator |
2026-04-05 04:17:40.685251 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 04:17:40.685265 | orchestrator | Sunday 05 April 2026 04:17:37 +0000 (0:00:00.284) 0:00:14.790 **********
2026-04-05 04:17:40.685280 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:17:40.685295 | orchestrator |
2026-04-05 04:17:40.685310 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 04:17:40.685324 | orchestrator | Sunday 05 April 2026 04:17:39 +0000 (0:00:01.982) 0:00:16.772 **********
2026-04-05 04:17:40.685339 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:17:40.685350 | orchestrator |
2026-04-05 04:17:40.685358 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:17:40.685367 | orchestrator | Sunday 05 April 2026 04:17:40 +0000 (0:00:00.330) 0:00:17.102 **********
2026-04-05 04:17:40.685376 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:17:40.685384 | orchestrator |
2026-04-05 04:17:40.685407 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:44.084187 | orchestrator | Sunday 05 April 2026 04:17:40 +0000 (0:00:00.308) 0:00:17.411 **********
2026-04-05 04:17:44.084342 | orchestrator |
2026-04-05 04:17:44.084377 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:44.084403 | orchestrator | Sunday 05 April 2026 04:17:40 +0000 (0:00:00.082) 0:00:17.493 **********
2026-04-05 04:17:44.084427 | orchestrator |
2026-04-05 04:17:44.084442 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:17:44.084457 | orchestrator | Sunday 05 April 2026 04:17:40 +0000 (0:00:00.076) 0:00:17.570 **********
2026-04-05 04:17:44.084471 | orchestrator |
2026-04-05 04:17:44.084484 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 04:17:44.084497 | orchestrator | Sunday 05 April 2026 04:17:40 +0000 (0:00:00.081) 0:00:17.651 **********
2026-04-05 04:17:44.084511 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:17:44.084524 | orchestrator |
2026-04-05 04:17:44.084537 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:17:44.084550 | orchestrator | Sunday 05 April 2026 04:17:42 +0000 (0:00:01.853) 0:00:19.505 **********
2026-04-05 04:17:44.084563 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-05 04:17:44.084576 | orchestrator |  "msg": [
2026-04-05 04:17:44.084591 | orchestrator |  "Validator run completed.",
2026-04-05 04:17:44.084615 | orchestrator |  "You can find the report file here:",
2026-04-05 04:17:44.084630 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-05T04:17:24+00:00-report.json",
2026-04-05 04:17:44.084647 | orchestrator |  "on the following host:",
2026-04-05 04:17:44.084660 | orchestrator |  "testbed-manager"
2026-04-05 04:17:44.084675 | orchestrator |  ]
2026-04-05 04:17:44.084689 | orchestrator | }
2026-04-05 04:17:44.084703 | orchestrator |
2026-04-05 04:17:44.084717 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:17:44.084733 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-05 04:17:44.084749 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 04:17:44.084763 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 04:17:44.084777 | orchestrator |
2026-04-05 04:17:44.084823 | orchestrator |
2026-04-05 04:17:44.084836 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:17:44.084850 | orchestrator | Sunday 05 April 2026 04:17:43 +0000 (0:00:01.102) 0:00:20.607 **********
2026-04-05 04:17:44.084864 | orchestrator | ===============================================================================
2026-04-05 04:17:44.084878 | orchestrator | Aggregate test results step one ----------------------------------------- 1.98s
2026-04-05 04:17:44.084892 | orchestrator | Write report file ------------------------------------------------------- 1.85s
2026-04-05 04:17:44.084939 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.72s
2026-04-05 04:17:44.084953 | orchestrator | Gather status data ------------------------------------------------------ 1.33s
2026-04-05 04:17:44.084966 | orchestrator | Get container info ------------------------------------------------------ 1.17s
2026-04-05 04:17:44.084980 | orchestrator | Create report output directory ------------------------------------------ 1.11s
2026-04-05 04:17:44.084994 | orchestrator | Print report file information ------------------------------------------- 1.10s
2026-04-05 04:17:44.085008 | orchestrator | Get timestamp for report file ------------------------------------------- 0.96s
2026-04-05 04:17:44.085022 | orchestrator | Set quorum test data ---------------------------------------------------- 0.59s
2026-04-05 04:17:44.085038 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-04-05 04:17:44.085068 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.55s
2026-04-05 04:17:44.085092 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.39s
2026-04-05 04:17:44.085106 | orchestrator | Prepare test data for container existance test -------------------------- 0.38s
2026-04-05 04:17:44.085117 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s
2026-04-05 04:17:44.085130 | orchestrator | Set health test data ---------------------------------------------------- 0.35s
2026-04-05 04:17:44.085142 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2026-04-05 04:17:44.085156 | orchestrator | Aggregate test results step two ----------------------------------------- 0.33s
2026-04-05 04:17:44.085169 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-04-05 04:17:44.085182 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-04-05 04:17:44.085195 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2026-04-05 04:17:44.521271 | orchestrator | + osism validate ceph-mgrs
2026-04-05 04:18:18.374103 | orchestrator |
2026-04-05 04:18:18.374254 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-05 04:18:18.374281 | orchestrator |
2026-04-05 04:18:18.374292 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-05 04:18:18.374304 | orchestrator | Sunday 05 April 2026 04:18:02 +0000 (0:00:00.522) 0:00:00.522 **********
2026-04-05 04:18:18.374315 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.374326 | orchestrator |
2026-04-05 04:18:18.374336 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-05 04:18:18.374346 | orchestrator | Sunday 05 April 2026 04:18:03 +0000 (0:00:00.861) 0:00:01.383 **********
2026-04-05 04:18:18.374377 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.374384 | orchestrator |
2026-04-05 04:18:18.374390 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-05 04:18:18.374396 | orchestrator | Sunday 05 April 2026 04:18:04 +0000 (0:00:01.200) 0:00:02.584 **********
2026-04-05 04:18:18.374403 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374411 | orchestrator |
2026-04-05 04:18:18.374417 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-05 04:18:18.374423 | orchestrator | Sunday 05 April 2026 04:18:04 +0000 (0:00:00.142) 0:00:02.726 **********
2026-04-05 04:18:18.374429 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374435 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:18:18.374464 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:18:18.374470 | orchestrator |
2026-04-05 04:18:18.374476 | orchestrator | TASK [Get container info] ******************************************************
2026-04-05 04:18:18.374482 | orchestrator | Sunday 05 April 2026 04:18:04 +0000 (0:00:00.329) 0:00:03.056 **********
2026-04-05 04:18:18.374488 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:18:18.374494 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374499 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:18:18.374505 | orchestrator |
2026-04-05 04:18:18.374511 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-05 04:18:18.374516 | orchestrator | Sunday 05 April 2026 04:18:05 +0000 (0:00:01.132) 0:00:04.188 **********
2026-04-05 04:18:18.374522 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374528 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:18:18.374535 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:18:18.374542 | orchestrator |
2026-04-05 04:18:18.374548 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-05 04:18:18.374555 | orchestrator | Sunday 05 April 2026 04:18:06 +0000 (0:00:00.384) 0:00:04.573 **********
2026-04-05 04:18:18.374562 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374569 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:18:18.374576 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:18:18.374582 | orchestrator |
2026-04-05 04:18:18.374589 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:18:18.374596 | orchestrator | Sunday 05 April 2026 04:18:06 +0000 (0:00:00.559) 0:00:05.133 **********
2026-04-05 04:18:18.374602 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374609 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:18:18.374615 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:18:18.374622 | orchestrator |
2026-04-05 04:18:18.374628 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-05 04:18:18.374635 | orchestrator | Sunday 05 April 2026 04:18:07 +0000 (0:00:00.379) 0:00:05.513 **********
2026-04-05 04:18:18.374641 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374648 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:18:18.374654 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:18:18.374661 | orchestrator |
2026-04-05 04:18:18.374667 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-05 04:18:18.374674 | orchestrator | Sunday 05 April 2026 04:18:07 +0000 (0:00:00.330) 0:00:05.843 **********
2026-04-05 04:18:18.374680 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.374687 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:18:18.374693 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:18:18.374700 | orchestrator |
2026-04-05 04:18:18.374706 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 04:18:18.374713 | orchestrator | Sunday 05 April 2026 04:18:08 +0000 (0:00:00.576) 0:00:06.420 **********
2026-04-05 04:18:18.374720 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374726 | orchestrator |
2026-04-05 04:18:18.374733 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 04:18:18.374740 | orchestrator | Sunday 05 April 2026 04:18:08 +0000 (0:00:00.285) 0:00:06.705 **********
2026-04-05 04:18:18.374747 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374753 | orchestrator |
2026-04-05 04:18:18.374760 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:18:18.374770 | orchestrator | Sunday 05 April 2026 04:18:08 +0000 (0:00:00.288) 0:00:06.994 **********
2026-04-05 04:18:18.374777 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374783 | orchestrator |
2026-04-05 04:18:18.374790 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.374797 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.334) 0:00:07.329 **********
2026-04-05 04:18:18.374803 | orchestrator |
2026-04-05 04:18:18.374810 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.374816 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.080) 0:00:07.409 **********
2026-04-05 04:18:18.374828 | orchestrator |
2026-04-05 04:18:18.374835 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.374841 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.073) 0:00:07.482 **********
2026-04-05 04:18:18.374848 | orchestrator |
2026-04-05 04:18:18.374855 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:18:18.374862 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.106) 0:00:07.588 **********
2026-04-05 04:18:18.374868 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374875 | orchestrator |
2026-04-05 04:18:18.374882 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-05 04:18:18.374888 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.285) 0:00:07.874 **********
2026-04-05 04:18:18.374950 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.374959 | orchestrator |
2026-04-05 04:18:18.374982 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-05 04:18:18.374989 | orchestrator | Sunday 05 April 2026 04:18:09 +0000 (0:00:00.315) 0:00:08.189 **********
2026-04-05 04:18:18.374995 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.375001 | orchestrator |
2026-04-05 04:18:18.375007 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-05 04:18:18.375012 | orchestrator | Sunday 05 April 2026 04:18:10 +0000 (0:00:00.133) 0:00:08.322 **********
2026-04-05 04:18:18.375019 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:18:18.375024 | orchestrator |
2026-04-05 04:18:18.375030 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-05 04:18:18.375039 | orchestrator | Sunday 05 April 2026 04:18:12 +0000 (0:00:02.026) 0:00:10.348 **********
2026-04-05 04:18:18.375049 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.375058 | orchestrator |
2026-04-05 04:18:18.375067 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-05 04:18:18.375077 | orchestrator | Sunday 05 April 2026 04:18:12 +0000 (0:00:00.511) 0:00:10.860 **********
2026-04-05 04:18:18.375087 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.375096 | orchestrator |
2026-04-05 04:18:18.375107 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-05 04:18:18.375113 | orchestrator | Sunday 05 April 2026 04:18:12 +0000 (0:00:00.366) 0:00:11.227 **********
2026-04-05 04:18:18.375118 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.375124 | orchestrator |
2026-04-05 04:18:18.375130 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-05 04:18:18.375135 | orchestrator | Sunday 05 April 2026 04:18:13 +0000 (0:00:00.162) 0:00:11.390 **********
2026-04-05 04:18:18.375141 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:18:18.375147 | orchestrator |
2026-04-05 04:18:18.375152 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 04:18:18.375158 | orchestrator | Sunday 05 April 2026 04:18:13 +0000 (0:00:00.161) 0:00:11.551 **********
2026-04-05 04:18:18.375164 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.375169 | orchestrator |
2026-04-05 04:18:18.375175 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 04:18:18.375181 | orchestrator | Sunday 05 April 2026 04:18:13 +0000 (0:00:00.265) 0:00:11.816 **********
2026-04-05 04:18:18.375187 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:18:18.375192 | orchestrator |
2026-04-05 04:18:18.375198 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 04:18:18.375203 | orchestrator | Sunday 05 April 2026 04:18:13 +0000 (0:00:00.279) 0:00:12.096 **********
2026-04-05 04:18:18.375209 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.375215 | orchestrator |
2026-04-05 04:18:18.375220 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 04:18:18.375226 | orchestrator | Sunday 05 April 2026 04:18:15 +0000 (0:00:01.426) 0:00:13.522 **********
2026-04-05 04:18:18.375232 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.375244 | orchestrator |
2026-04-05 04:18:18.375250 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:18:18.375255 | orchestrator | Sunday 05 April 2026 04:18:15 +0000 (0:00:00.299) 0:00:13.822 **********
2026-04-05 04:18:18.375261 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.375267 | orchestrator |
2026-04-05 04:18:18.375272 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.375278 | orchestrator | Sunday 05 April 2026 04:18:15 +0000 (0:00:00.274) 0:00:14.097 **********
2026-04-05 04:18:18.375284 | orchestrator |
2026-04-05 04:18:18.375289 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.375295 | orchestrator | Sunday 05 April 2026 04:18:15 +0000 (0:00:00.101) 0:00:14.198 **********
2026-04-05 04:18:18.375301 | orchestrator |
2026-04-05 04:18:18.375306 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:18.375312 | orchestrator | Sunday 05 April 2026 04:18:16 +0000 (0:00:00.087) 0:00:14.285 **********
2026-04-05 04:18:18.375318 | orchestrator |
2026-04-05 04:18:18.375323 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 04:18:18.375329 | orchestrator | Sunday 05 April 2026 04:18:16 +0000 (0:00:00.338) 0:00:14.624 **********
2026-04-05 04:18:18.375335 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:18.375340 | orchestrator |
2026-04-05 04:18:18.375351 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:18:18.375357 | orchestrator | Sunday 05 April 2026 04:18:17 +0000 (0:00:01.469) 0:00:16.093 **********
2026-04-05 04:18:18.375362 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-05 04:18:18.375368 | orchestrator |  "msg": [
2026-04-05 04:18:18.375375 | orchestrator |  "Validator run completed.",
2026-04-05 04:18:18.375381 | orchestrator |  "You can find the report file here:",
2026-04-05 04:18:18.375387 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-05T04:18:03+00:00-report.json",
2026-04-05 04:18:18.375394 | orchestrator |  "on the following host:",
2026-04-05 04:18:18.375400 | orchestrator |  "testbed-manager"
2026-04-05 04:18:18.375406 | orchestrator |  ]
2026-04-05 04:18:18.375413 | orchestrator | }
2026-04-05 04:18:18.375419 | orchestrator |
2026-04-05 04:18:18.375425 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:18:18.375432 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 04:18:18.375440 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 04:18:18.375452 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 04:18:18.945811 | orchestrator |
2026-04-05 04:18:18.946078 | orchestrator |
2026-04-05 04:18:18.946105 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:18:18.946131 | orchestrator | Sunday 05 April 2026 04:18:18 +0000 (0:00:00.508) 0:00:16.602 **********
2026-04-05 04:18:18.946152 | orchestrator | ===============================================================================
2026-04-05 04:18:18.946166 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.03s
2026-04-05 04:18:18.946184 | orchestrator | Write report file ------------------------------------------------------- 1.47s
2026-04-05 04:18:18.946201 | orchestrator | Aggregate test results step one ----------------------------------------- 1.43s
2026-04-05 04:18:18.946218 | orchestrator | Create report output directory ------------------------------------------ 1.20s
2026-04-05 04:18:18.946235 | orchestrator | Get container info ------------------------------------------------------ 1.13s
2026-04-05 04:18:18.946250 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-04-05 04:18:18.946306 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.58s
2026-04-05 04:18:18.946325 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-04-05 04:18:18.946344 | orchestrator | Flush handlers ---------------------------------------------------------- 0.53s
2026-04-05 04:18:18.946362 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.51s
2026-04-05 04:18:18.946379 | orchestrator | Print report file information ------------------------------------------- 0.51s
2026-04-05 04:18:18.946397 | orchestrator | Set test result to failed if container is missing ----------------------- 0.38s
2026-04-05 04:18:18.946414 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s
2026-04-05 04:18:18.946432 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.37s
2026-04-05 04:18:18.946448 | orchestrator | Aggregate test results step three --------------------------------------- 0.33s
2026-04-05 04:18:18.946466 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2026-04-05 04:18:18.946483 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2026-04-05 04:18:18.946501 | orchestrator | Fail due to missing containers ------------------------------------------ 0.32s
2026-04-05 04:18:18.946521 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2026-04-05 04:18:18.946541 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s
2026-04-05 04:18:19.386812 | orchestrator | + osism validate ceph-osds
2026-04-05 04:18:42.255631 | orchestrator |
2026-04-05 04:18:42.255750 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-05 04:18:42.255767 | orchestrator |
2026-04-05 04:18:42.255779 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-05 04:18:42.255791 | orchestrator | Sunday 05 April 2026 04:18:37 +0000 (0:00:00.458) 0:00:00.458 **********
2026-04-05 04:18:42.255808 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:42.255832 | orchestrator |
2026-04-05 04:18:42.255859 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 04:18:42.255878 | orchestrator | Sunday 05 April 2026 04:18:38 +0000 (0:00:01.010) 0:00:01.469 **********
2026-04-05 04:18:42.255928 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:42.255947 | orchestrator |
2026-04-05 04:18:42.255964 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-05 04:18:42.255982 | orchestrator | Sunday 05 April 2026 04:18:38 +0000 (0:00:00.599) 0:00:02.069 **********
2026-04-05 04:18:42.256000 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:18:42.256018 | orchestrator |
2026-04-05 04:18:42.256037 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-05 04:18:42.256055 | orchestrator | Sunday 05 April 2026 04:18:39 +0000 (0:00:00.772) 0:00:02.841 **********
2026-04-05 04:18:42.256074 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:42.256094 | orchestrator |
2026-04-05 04:18:42.256114 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 04:18:42.256133 | orchestrator | Sunday 05 April 2026 04:18:39 +0000 (0:00:00.134) 0:00:02.976 **********
2026-04-05 04:18:42.256152 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:42.256174 | orchestrator |
2026-04-05 04:18:42.256194 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 04:18:42.256214 | orchestrator | Sunday 05 April 2026 04:18:39 +0000 (0:00:00.165) 0:00:03.142 **********
2026-04-05 04:18:42.256235 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:42.256257 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:18:42.256276 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:18:42.256292 | orchestrator |
2026-04-05 04:18:42.256306 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 04:18:42.256319 | orchestrator | Sunday 05 April 2026 04:18:40 +0000 (0:00:00.372) 0:00:03.514 **********
2026-04-05 04:18:42.256359 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:42.256372 | orchestrator |
2026-04-05 04:18:42.256387 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 04:18:42.256399 | orchestrator | Sunday 05 April 2026 04:18:40 +0000 (0:00:00.193) 0:00:03.707 **********
2026-04-05 04:18:42.256411 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:42.256424 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:42.256436 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:42.256448 | orchestrator |
2026-04-05 04:18:42.256461 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-05 04:18:42.256474 | orchestrator | Sunday 05 April 2026 04:18:40 +0000 (0:00:00.359) 0:00:04.067 **********
2026-04-05 04:18:42.256486 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:42.256498 | orchestrator |
2026-04-05 04:18:42.256512 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:18:42.256522 | orchestrator | Sunday 05 April 2026 04:18:41 +0000 (0:00:00.865) 0:00:04.933 **********
2026-04-05 04:18:42.256533 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:42.256544 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:42.256554 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:42.256565 | orchestrator |
2026-04-05 04:18:42.256576 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-05 04:18:42.256586 | orchestrator | Sunday 05 April 2026 04:18:41 +0000 (0:00:00.323) 0:00:05.256 **********
2026-04-05 04:18:42.256600 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6faa108109b2b6e543958062be56aa572c7a351d0d250bbc44d5f756f993b2f1', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-05 04:18:42.256615 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cbb02f62e05a2d59e5ec2c7ff432eed07441dc00828a9fa5f1f4fefcd0280c4e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.256628 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f295c9751866eb29645f17e5c5b7b945813294a54ad91d7136732b5c59881115', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.256639 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0548b34576f65f28e089fe5230245526f79ed405bd4125ed0e46a745c18a5134', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-04-05 04:18:42.256650 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71475dc96de8a0891a49ed7e74b06ad7c4b32665bf5b9ba6529fa5967fc63886', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-05 04:18:42.256736 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb1be6fa74a74f8d554bf882a1208c1a3e09d3100bbe66c2c092532dc3c02876', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.256751 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ef72fb7501addc6c107fab6206837c289ec6e6e300c0b82799c98275331218f', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.256762 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fd2ac1f1269a99780441c090fcecdc1958f752b3379856b57a8307562cc52461', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})
2026-04-05 04:18:42.256782 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3532e5dbe81bd5da389e5d9a36219f48007f47eb16f56a163db6b4f5c44901bb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256800 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7cd171b7fdb89d5f774b146890b341539a25356477449196cbc12c8bc0c40aef', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256811 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2b22ed79f7d1665fd883b896a456c28a0818d72b0e70cd2a6c5edce1abbf7ce4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256824 | orchestrator | ok: [testbed-node-3] => (item={'id': 'df59532992bd052641662ce210c22182d9709adca586947c24d6feaf323eee1f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256836 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f0887b9872273af6b497b15443cd4dfd46747db1ebdc469ea3ea81bef91f022d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256847 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b895d3ed5107750622c8f407692b68cfa610d5b34d27967cb4e4bcd7a734cb90', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.256859 | orchestrator | skipping: [testbed-node-3] => (item={'id': '30c214f942760ff438815bed25b4e7ab6842004eb787eeb1239a5d89728f671a', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:42.256870 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'caf1458bb56b0f20c48d3897a1ec7e39d7003435f9cfef0aa5d2b5884de17346', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:42.256881 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d63e3efdaa369ce0f3e6efece22190ad0efbea6fad6aeb8512e1573bb0b8234', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.256920 | orchestrator | skipping: [testbed-node-3] => (item={'id': '388d954eeb25f254893978d10801b5241a012e68212e2033563a3f7d00b2d287', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.256933 | orchestrator | skipping: [testbed-node-3] => (item={'id': '093c7b247e9a113e57de62a45bd6a1d2e11a05f7891d8ae5ea9cc956acdf67c7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.256944 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0f57da409103c908a6d493b3235ac63e71d697244e29119f4169edf225cc0bfc', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-05 04:18:42.256965 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79e963258ce39b4638099499182d753eeeed97608b708a238b5ab54a2d86bbb4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.614208 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c84c12241d6022a901e0a646c7d2f7eaef61c21d52f4376b79d7ce03d36476cf', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.614295 | orchestrator | skipping: [testbed-node-4] => (item={'id': '04adb6d52e0e65f056579028148037e82977f53412ea2857988cb52c40ce118d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-04-05 04:18:42.614302 | orchestrator | skipping: [testbed-node-4] => (item={'id': '83c497402c7291fd2aa1ded1597b85904c0d35d9eb6efc24d9e6af379f4882f9', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-05 04:18:42.614320 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9c9505568f890ea1c688f67d65ad414dd46622c59d08839e3c84edcc6b3e312f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.614325 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a85815d2936c15f3ad2b0b4d5f5fd69d196c9e7f823b264a670990b4a6715e3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.614330 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ee498f4ccb45b45e18ec22679f551c7d4feba7c19bf50675f31aa4059c8d3718', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})
2026-04-05 04:18:42.614334 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3675b73a5225ffcc34c5615163a298d5ffc749b7705ba8c5c8baddb4f9b12431', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614340 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c537baad58252ff8b1b318a39cc11061e0d8540646363536d7e08020508ee0b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614345 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2cbb97b816aae40e3e70d6d6afaa9a61c9cae80ac3e3256bcb276afc8285415', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614352 | orchestrator | ok: [testbed-node-4] => (item={'id': '1d62be940947be9bea9c7835698a037b4c563c5b214609ff376bfc1cf49d3b94', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614357 | orchestrator | ok: [testbed-node-4] => (item={'id': '8e8206b4886dbaeb1dda79096305773205217047e63a499c7fc5f16bb283838d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614362 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'db2b6333e0f09d15b005bc1b335f3839b57cefb3dbcde47bba9fcf529e69c244', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614366 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9e9341839247ee033a34398d30468bf54e2291987d884c437622e598e679d81', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:42.614371 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb9a6823eccc3c0c14678e1d128fb009090abbf0846d31fd8f4151f01b682bc9', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:42.614389 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33ccbc24a3a14d7daec31a41da547d9e81730ae6d397d0ee30075ff2fcfa1e4e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.614395 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ce066dfc630b921619da8463ac4c391cac0d9d41f34a38d0392dd206ca16da97', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.614399 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4de5211a0c3711bba87f844a599b87977fbb12438588de18f4e8d439d9a900ee', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:42.614404 | orchestrator | skipping: [testbed-node-5] => (item={'id': '76a4a3680a2a736e7bc372ec9c27bc9cd1aa876a4d77ba577f29d187b8dbb591', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-05 04:18:42.614412 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c71532be3f8594900bd6c79c215e1fad4add71c892c4f7ffc0fa7377e9c208c1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.614417 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b67575f5d78f4e07f6a65f931bcb29562d6c48867e1692a12b3a28e933e0196', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
2026-04-05 04:18:42.614422 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e665ac066b548e7086daf0ed92b0020cd0227d5d83ecc7302b9ddb8c6dcc7d8', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 21 minutes (unhealthy)'})
2026-04-05 04:18:42.614426 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4dfa053b217372e378584485c77165b82f5cde6920f08bd52620741d43f9ceb6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-04-05 04:18:42.614431 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63b2eb4aa804a0e2d1bb3a4d8e480c0383df1425fc914df0ca88e3090ad0ac1f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.614436 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cfff164d6b02e968836940aa8ab5e8f168d6a1477d6a6ed964c46cf5acbd59e9', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 44 minutes (healthy)'})
2026-04-05 04:18:42.614440 | orchestrator | skipping: [testbed-node-5] => (item={'id': '75a3975ad611397786af397f35c24c75edea1133f027760380614a0284d91c92', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 51 minutes (healthy)'})
2026-04-05 04:18:42.614445 | orchestrator | skipping: [testbed-node-5] => (item={'id': '58ad392c89360bf9c4e528bcb4dc5143bc7df5e4233cf02c577229c53f84b6bb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614450 | orchestrator | skipping: [testbed-node-5] => (item={'id': '014777a70a71e45b893b8b3d60640cec5d9c9a0255080ea1d8942640c0360649', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614455 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fca01ee15921ca14fb7c8b77f491095765ca229c78a364116866b543233867b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614463 | orchestrator | ok: [testbed-node-5] => (item={'id': 'bc502b29e226e2444d3f9f6055b92393c3274c66fe55971831bd3451ed4141c3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:42.614482 | orchestrator | ok: [testbed-node-5] => (item={'id': '54c8a05d9e09f9cc10f330b8be82cbd7d9ec2dcef55dc93df54bd46e3149a5f5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:55.042663 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c54120cf9c986a818195235fbc8ad334d5b238471c02040751536fc7d7dd76bc', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 04:18:55.042771 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37a4704410dae8ca8a3c1d9029f90d3d62536663ff6ab7025c090731628e3b49', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:55.042788 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e4d7d8016a551efc2afdb3fa788f380b4929b1cd10bc109c51ae2098c97cadcb', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 04:18:55.042800 | orchestrator | skipping: [testbed-node-5] => (item={'id': '359c5b7283c1843203bf1615e8efdedc36ea4227115bd4f8210d7adaeff91f50', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:55.042812 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2fb4321c8ba33f3b41b476bb914be1216677f74ced5ecda32ea3e59848d9893f', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:55.042824 | orchestrator | skipping: [testbed-node-5] => (item={'id': '49b7e5f5be15fd8b5d11914c25655a5c9b32fa3d7a69b62dc07d98ac6511e1d5', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 04:18:55.042835 | orchestrator |
2026-04-05 04:18:55.042848 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-04-05 04:18:55.042859 | orchestrator | Sunday 05 April 2026 04:18:42 +0000 (0:00:00.649) 0:00:05.906 **********
2026-04-05 04:18:55.042886 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.042971 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.042981 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.042990 | orchestrator |
2026-04-05 04:18:55.042999 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-05 04:18:55.043009 | orchestrator | Sunday 05 April 2026 04:18:42 +0000 (0:00:00.335) 0:00:06.242 **********
2026-04-05 04:18:55.043019 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043029 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:18:55.043039 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:18:55.043048 | orchestrator |
2026-04-05 04:18:55.043058 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-05 04:18:55.043068 | orchestrator | Sunday 05 April 2026 04:18:43 +0000 (0:00:00.560) 0:00:06.802 **********
2026-04-05 04:18:55.043078 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.043089 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.043099 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.043108 | orchestrator |
2026-04-05 04:18:55.043118 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:18:55.043155 | orchestrator | Sunday 05 April 2026 04:18:43 +0000 (0:00:00.382) 0:00:07.184 **********
2026-04-05 04:18:55.043166 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.043177 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.043186 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.043196 | orchestrator |
2026-04-05 04:18:55.043206 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-05 04:18:55.043216 | orchestrator | Sunday 05 April 2026 04:18:44 +0000 (0:00:00.351) 0:00:07.535 **********
2026-04-05 04:18:55.043245 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-05 04:18:55.043258 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-05 04:18:55.043268 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043278 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-05 04:18:55.043288 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-05 04:18:55.043299 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:18:55.043311 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-05 04:18:55.043321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-05 04:18:55.043331 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:18:55.043341 | orchestrator |
2026-04-05 04:18:55.043352 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-05 04:18:55.043363 | orchestrator | Sunday 05 April 2026 04:18:44 +0000 (0:00:00.367) 0:00:07.903 **********
2026-04-05 04:18:55.043374 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.043384 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.043395 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.043405 | orchestrator |
2026-04-05 04:18:55.043414 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-05 04:18:55.043425 | orchestrator | Sunday 05 April 2026 04:18:45 +0000 (0:00:00.600) 0:00:08.504 **********
2026-04-05 04:18:55.043434 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043465 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:18:55.043478 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:18:55.043488 | orchestrator |
2026-04-05 04:18:55.043498 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-05 04:18:55.043508 | orchestrator | Sunday 05 April 2026 04:18:45 +0000 (0:00:00.343) 0:00:08.847 **********
2026-04-05 04:18:55.043519 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043529 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:18:55.043539 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:18:55.043550 | orchestrator |
2026-04-05 04:18:55.043560 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-05 04:18:55.043569 | orchestrator | Sunday 05 April 2026 04:18:45 +0000 (0:00:00.371) 0:00:09.218 **********
2026-04-05 04:18:55.043578 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.043587 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.043596 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.043604 | orchestrator |
2026-04-05 04:18:55.043629 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 04:18:55.043639 | orchestrator | Sunday 05 April 2026 04:18:46 +0000 (0:00:00.266) 0:00:09.879 **********
2026-04-05 04:18:55.043660 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043670 | orchestrator |
2026-04-05 04:18:55.043681 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 04:18:55.043700 | orchestrator | Sunday 05 April 2026 04:18:46 +0000 (0:00:00.266) 0:00:10.146 **********
2026-04-05 04:18:55.043711 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043721 | orchestrator |
2026-04-05 04:18:55.043732 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:18:55.043755 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.258) 0:00:10.405 **********
2026-04-05 04:18:55.043767 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043778 | orchestrator |
2026-04-05 04:18:55.043789 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:55.043799 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.281) 0:00:10.687 **********
2026-04-05 04:18:55.043809 | orchestrator |
2026-04-05 04:18:55.043819 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:55.043830 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.074) 0:00:10.762 **********
2026-04-05 04:18:55.043839 | orchestrator |
2026-04-05 04:18:55.043850 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:18:55.043860 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.077) 0:00:10.839 **********
2026-04-05 04:18:55.043871 | orchestrator |
2026-04-05 04:18:55.043881 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:18:55.043909 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.074) 0:00:10.913 **********
2026-04-05 04:18:55.043919 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043929 | orchestrator |
2026-04-05 04:18:55.043939 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-05 04:18:55.043949 | orchestrator | Sunday 05 April 2026 04:18:47 +0000 (0:00:00.297) 0:00:11.210 **********
2026-04-05 04:18:55.043959 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.043969 | orchestrator |
2026-04-05 04:18:55.043979 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:18:55.043988 | orchestrator | Sunday 05 April 2026 04:18:48 +0000 (0:00:00.247) 0:00:11.458 **********
2026-04-05 04:18:55.043998 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044007 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.044017 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.044027 | orchestrator |
2026-04-05 04:18:55.044037 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-04-05 04:18:55.044047 | orchestrator | Sunday 05 April 2026 04:18:48 +0000 (0:00:00.342) 0:00:11.801 **********
2026-04-05 04:18:55.044056 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044065 | orchestrator |
2026-04-05 04:18:55.044075 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-04-05 04:18:55.044085 | orchestrator | Sunday 05 April 2026 04:18:49 +0000 (0:00:00.897) 0:00:12.698 **********
2026-04-05 04:18:55.044095 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 04:18:55.044104 | orchestrator |
2026-04-05 04:18:55.044114 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-04-05 04:18:55.044124 | orchestrator | Sunday 05 April 2026 04:18:51 +0000 (0:00:01.787) 0:00:14.485 **********
2026-04-05 04:18:55.044133 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044143 | orchestrator |
2026-04-05 04:18:55.044152 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-04-05 04:18:55.044162 | orchestrator | Sunday 05 April 2026 04:18:51 +0000 (0:00:00.152) 0:00:14.638 **********
2026-04-05 04:18:55.044172 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044182 | orchestrator |
2026-04-05 04:18:55.044191 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-04-05 04:18:55.044201 | orchestrator | Sunday 05 April 2026 04:18:51 +0000 (0:00:00.379) 0:00:15.017 **********
2026-04-05 04:18:55.044211 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:18:55.044220 | orchestrator |
2026-04-05 04:18:55.044230 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-04-05 04:18:55.044240 | orchestrator | Sunday 05 April 2026 04:18:51 +0000 (0:00:00.137) 0:00:15.155 **********
2026-04-05 04:18:55.044249 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044259 | orchestrator |
2026-04-05 04:18:55.044269 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:18:55.044278 | orchestrator | Sunday 05 April 2026 04:18:51 +0000 (0:00:00.139) 0:00:15.294 **********
2026-04-05 04:18:55.044296 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:18:55.044307 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:18:55.044316 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:18:55.044326 | orchestrator |
2026-04-05 04:18:55.044335 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-04-05 04:18:55.044345 | orchestrator | Sunday 05 April 2026 04:18:52 +0000 (0:00:00.339) 0:00:15.634 **********
2026-04-05 04:18:55.044355 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:18:55.044365 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:18:55.044375 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:19:06.745751 | orchestrator |
2026-04-05 04:19:06.745861 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-04-05 04:19:06.745877 | orchestrator | Sunday 05 April 2026 04:18:55 +0000 (0:00:02.703) 0:00:18.337 **********
2026-04-05 04:19:06.745923 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.745937 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.745958 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.745969 | orchestrator |
2026-04-05 04:19:06.745981 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-05 04:19:06.745992 | orchestrator | Sunday 05 April 2026 04:18:55 +0000 (0:00:00.403) 0:00:18.741 **********
2026-04-05 04:19:06.746003 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746069 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746084 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746096 | orchestrator |
2026-04-05 04:19:06.746107 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-05 04:19:06.746119 | orchestrator | Sunday 05 April 2026 04:18:55 +0000 (0:00:00.545) 0:00:19.286 **********
2026-04-05 04:19:06.746130 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:19:06.746142 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:19:06.746153 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:19:06.746164 | orchestrator |
2026-04-05 04:19:06.746176 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-05 04:19:06.746204 | orchestrator | Sunday 05 April 2026 04:18:56 +0000 (0:00:00.349) 0:00:19.636 **********
2026-04-05 04:19:06.746215 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746227 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746238 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746249 | orchestrator |
2026-04-05 04:19:06.746260 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-05 04:19:06.746271 | orchestrator | Sunday 05 April 2026 04:18:56 +0000 (0:00:00.615) 0:00:20.252 **********
2026-04-05 04:19:06.746283 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:19:06.746297 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:19:06.746309 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:19:06.746323 | orchestrator |
2026-04-05 04:19:06.746336 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-05 04:19:06.746350 | orchestrator | Sunday 05 April 2026 04:18:57 +0000 (0:00:00.327) 0:00:20.579 **********
2026-04-05 04:19:06.746364 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:19:06.746377 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:19:06.746390 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:19:06.746403 | orchestrator |
2026-04-05 04:19:06.746416 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 04:19:06.746431 | orchestrator | Sunday 05 April 2026 04:18:57 +0000 (0:00:00.329) 0:00:20.908 **********
2026-04-05 04:19:06.746450 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746471 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746493 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746513 | orchestrator |
2026-04-05 04:19:06.746532 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-04-05 04:19:06.746552 | orchestrator | Sunday 05 April 2026 04:18:58 +0000 (0:00:00.559) 0:00:21.468 **********
2026-04-05 04:19:06.746573 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746625 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746647 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746668 | orchestrator |
2026-04-05 04:19:06.746688 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-05 04:19:06.746708 | orchestrator | Sunday 05 April 2026 04:18:59 +0000 (0:00:00.911) 0:00:22.379 **********
2026-04-05 04:19:06.746728 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746743 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746754 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746764 | orchestrator |
2026-04-05 04:19:06.746775 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-05 04:19:06.746786 | orchestrator | Sunday 05 April 2026 04:18:59 +0000 (0:00:00.413) 0:00:22.793 **********
2026-04-05 04:19:06.746797 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:19:06.746808 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:19:06.746818 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:19:06.746829 | orchestrator |
2026-04-05 04:19:06.746840 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-05 04:19:06.746851 | orchestrator | Sunday 05 April 2026 04:18:59 +0000 (0:00:00.337) 0:00:23.130 **********
2026-04-05 04:19:06.746862 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:19:06.746873 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:19:06.746883 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:19:06.746975 | orchestrator |
2026-04-05 04:19:06.746987 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 04:19:06.746998 | orchestrator | Sunday 05 April 2026 04:19:00 +0000 (0:00:00.651) 0:00:23.781 **********
2026-04-05 04:19:06.747009 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:19:06.747020 | orchestrator |
2026-04-05 04:19:06.747031 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 04:19:06.747042 | orchestrator | Sunday 05 April 2026 04:19:00 +0000 (0:00:00.296) 0:00:24.078 **********
2026-04-05 04:19:06.747053 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:19:06.747064 | orchestrator |
2026-04-05 04:19:06.747075 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 04:19:06.747085 | orchestrator | Sunday 05 April 2026 04:19:01 +0000 (0:00:00.291) 0:00:24.370 **********
2026-04-05 04:19:06.747096 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:19:06.747107 | orchestrator |
2026-04-05 04:19:06.747118 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 04:19:06.747128 | orchestrator | Sunday 05 April 2026 04:19:02 +0000 (0:00:01.757) 0:00:26.127 **********
2026-04-05 04:19:06.747140 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:19:06.747150 | orchestrator |
2026-04-05 04:19:06.747161 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 04:19:06.747172 | orchestrator | Sunday 05 April 2026 04:19:03 +0000 (0:00:00.308) 0:00:26.436 **********
2026-04-05 04:19:06.747183 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:19:06.747194 | orchestrator |
2026-04-05 04:19:06.747226 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:19:06.747238 | orchestrator | Sunday 05 April 2026 04:19:03 +0000 (0:00:00.270) 0:00:26.707 **********
2026-04-05 04:19:06.747249 | orchestrator |
2026-04-05 04:19:06.747260 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:19:06.747271 | orchestrator | Sunday 05 April 2026 04:19:03 +0000 (0:00:00.075) 0:00:26.782 **********
2026-04-05 04:19:06.747282 | orchestrator |
2026-04-05 04:19:06.747293 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 04:19:06.747303 | orchestrator | Sunday 05 April 2026 04:19:03 +0000 (0:00:00.075) 0:00:26.858 **********
2026-04-05 04:19:06.747314 | orchestrator |
2026-04-05 04:19:06.747325 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 04:19:06.747336 | orchestrator | Sunday 05 April 2026 04:19:03 +0000 (0:00:00.076) 0:00:26.934 **********
2026-04-05 04:19:06.747358 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 04:19:06.747369 | orchestrator |
2026-04-05 04:19:06.747380 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 04:19:06.747391 | orchestrator | Sunday 05 April 2026 04:19:05 +0000 (0:00:01.721) 0:00:28.656 **********
2026-04-05 04:19:06.747409 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-05 04:19:06.747421 | orchestrator |  "msg": [
2026-04-05 04:19:06.747433 | orchestrator |  "Validator run completed.",
2026-04-05 04:19:06.747444 | orchestrator |  "You can find the report file here:",
2026-04-05 04:19:06.747455 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-05T04:18:38+00:00-report.json",
2026-04-05 04:19:06.747467 | orchestrator |  "on the following host:",
2026-04-05 04:19:06.747478 | orchestrator |  "testbed-manager"
2026-04-05 04:19:06.747501 | orchestrator |  ]
2026-04-05 04:19:06.747512 | orchestrator | }
2026-04-05 04:19:06.747524 | orchestrator |
2026-04-05 04:19:06.747535 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:19:06.747547 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 04:19:06.747558 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9
rescued=0 ignored=0 2026-04-05 04:19:06.747569 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 04:19:06.747580 | orchestrator | 2026-04-05 04:19:06.747591 | orchestrator | 2026-04-05 04:19:06.747602 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:19:06.747613 | orchestrator | Sunday 05 April 2026 04:19:06 +0000 (0:00:00.977) 0:00:29.634 ********** 2026-04-05 04:19:06.747624 | orchestrator | =============================================================================== 2026-04-05 04:19:06.747635 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.70s 2026-04-05 04:19:06.747646 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.79s 2026-04-05 04:19:06.747657 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s 2026-04-05 04:19:06.747667 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2026-04-05 04:19:06.747678 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s 2026-04-05 04:19:06.747688 | orchestrator | Print report file information ------------------------------------------- 0.98s 2026-04-05 04:19:06.747699 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.91s 2026-04-05 04:19:06.747710 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.90s 2026-04-05 04:19:06.747721 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.87s 2026-04-05 04:19:06.747731 | orchestrator | Create report output directory ------------------------------------------ 0.77s 2026-04-05 04:19:06.747742 | orchestrator | Set test result to passed if all containers are running ----------------- 0.66s 2026-04-05 04:19:06.747753 | 
orchestrator | Pass test if no sub test failed ----------------------------------------- 0.65s 2026-04-05 04:19:06.747764 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.65s 2026-04-05 04:19:06.747775 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.62s 2026-04-05 04:19:06.747785 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.60s 2026-04-05 04:19:06.747796 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s 2026-04-05 04:19:06.747807 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s 2026-04-05 04:19:06.747818 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s 2026-04-05 04:19:06.747835 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.55s 2026-04-05 04:19:06.747846 | orchestrator | Calculate sub test expression results ----------------------------------- 0.41s 2026-04-05 04:19:07.107019 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-05 04:19:07.114507 | orchestrator | + set -e 2026-04-05 04:19:07.114586 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 04:19:07.116695 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 04:19:07.116729 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 04:19:07.116735 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 04:19:07.116740 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 04:19:07.116745 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 04:19:07.116751 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 04:19:07.116755 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:19:07.116760 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:19:07.116765 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 
04:19:07.116769 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 04:19:07.116774 | orchestrator | ++ export ARA=false 2026-04-05 04:19:07.116779 | orchestrator | ++ ARA=false 2026-04-05 04:19:07.116783 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 04:19:07.116788 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 04:19:07.116792 | orchestrator | ++ export TEMPEST=false 2026-04-05 04:19:07.116796 | orchestrator | ++ TEMPEST=false 2026-04-05 04:19:07.116800 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 04:19:07.116805 | orchestrator | ++ IS_ZUUL=true 2026-04-05 04:19:07.116809 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:19:07.116814 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:19:07.116818 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 04:19:07.116822 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 04:19:07.116826 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 04:19:07.116831 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 04:19:07.116835 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 04:19:07.116840 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 04:19:07.116844 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 04:19:07.116848 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 04:19:07.116853 | orchestrator | + source /etc/os-release 2026-04-05 04:19:07.116857 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-05 04:19:07.116861 | orchestrator | ++ NAME=Ubuntu 2026-04-05 04:19:07.116865 | orchestrator | ++ VERSION_ID=24.04 2026-04-05 04:19:07.116870 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-05 04:19:07.116874 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-05 04:19:07.116878 | orchestrator | ++ ID=ubuntu 2026-04-05 04:19:07.116882 | orchestrator | ++ ID_LIKE=debian 2026-04-05 04:19:07.116887 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-05 04:19:07.116920 | 
orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-05 04:19:07.116925 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-05 04:19:07.116929 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-05 04:19:07.116934 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-05 04:19:07.116939 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-05 04:19:07.116943 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-05 04:19:07.116949 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-05 04:19:07.116954 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 04:19:07.150285 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-05 04:19:33.450608 | orchestrator | 2026-04-05 04:19:33.450696 | orchestrator | # Status of Elasticsearch 2026-04-05 04:19:33.450713 | orchestrator | 2026-04-05 04:19:33.450737 | orchestrator | + pushd /opt/configuration/contrib 2026-04-05 04:19:33.450746 | orchestrator | + echo 2026-04-05 04:19:33.450760 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-05 04:19:33.450766 | orchestrator | + echo 2026-04-05 04:19:33.450773 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-05 04:19:33.650603 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-05 04:19:33.650829 | orchestrator | 2026-04-05 04:19:33.650847 | orchestrator | # Status of MariaDB 2026-04-05 04:19:33.650855 | orchestrator | 2026-04-05 04:19:33.650862 | orchestrator | + echo 2026-04-05 04:19:33.650869 | orchestrator | + echo '# Status of MariaDB' 2026-04-05 04:19:33.650876 | orchestrator | + echo 2026-04-05 04:19:33.650883 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-05 04:19:33.707982 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 04:19:33.708085 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-05 04:19:33.708110 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-05 04:19:33.708134 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-05 04:19:33.783803 | orchestrator | Reading package lists... 2026-04-05 04:19:34.217597 | orchestrator | Building dependency tree... 2026-04-05 04:19:34.218099 | orchestrator | Reading state information... 2026-04-05 04:19:34.743212 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-05 04:19:34.743318 | orchestrator | bc set to manually installed. 2026-04-05 04:19:34.743334 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-04-05 04:19:35.509559 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-05 04:19:35.509705 | orchestrator | 2026-04-05 04:19:35.509718 | orchestrator | # Status of Prometheus 2026-04-05 04:19:35.509726 | orchestrator | + echo 2026-04-05 04:19:35.509732 | orchestrator | + echo '# Status of Prometheus' 2026-04-05 04:19:35.509739 | orchestrator | + echo 2026-04-05 04:19:35.509745 | orchestrator | 2026-04-05 04:19:35.509752 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-05 04:19:35.578821 | orchestrator | Unauthorized 2026-04-05 04:19:35.582279 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-05 04:19:35.670253 | orchestrator | Unauthorized 2026-04-05 04:19:35.675569 | orchestrator | 2026-04-05 04:19:35.675696 | orchestrator | # Status of RabbitMQ 2026-04-05 04:19:35.675726 | orchestrator | 2026-04-05 04:19:35.675745 | orchestrator | + echo 2026-04-05 04:19:35.675765 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-05 04:19:35.675785 | orchestrator | + echo 2026-04-05 04:19:35.676277 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-05 04:19:35.745770 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 04:19:35.745870 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-05 04:19:35.745934 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-05 04:19:36.328659 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-05 04:19:36.344609 | orchestrator | 2026-04-05 04:19:36.344710 | orchestrator | # Status of Redis 2026-04-05 04:19:36.344725 | orchestrator | 2026-04-05 04:19:36.344732 | orchestrator | + echo 2026-04-05 04:19:36.344738 | orchestrator | + echo '# Status of Redis' 2026-04-05 04:19:36.344744 | orchestrator | + echo 2026-04-05 04:19:36.344752 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-05 04:19:36.351500 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002378s;;;0.000000;10.000000 2026-04-05 04:19:36.352368 | orchestrator | 2026-04-05 04:19:36.352406 | orchestrator | # Create backup of MariaDB database 2026-04-05 04:19:36.352420 | orchestrator | 2026-04-05 04:19:36.352430 | orchestrator | + popd 2026-04-05 04:19:36.352441 | orchestrator | + echo 2026-04-05 04:19:36.352452 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-05 04:19:36.352464 | orchestrator | + echo 2026-04-05 04:19:36.352475 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-05 04:19:38.625560 | orchestrator | 2026-04-05 04:19:38 | INFO  | Task 32739206-7550-40a5-9321-45ad06e917cf (mariadb_backup) was prepared for execution. 2026-04-05 04:19:38.625659 | orchestrator | 2026-04-05 04:19:38 | INFO  | It takes a moment until task 32739206-7550-40a5-9321-45ad06e917cf (mariadb_backup) has been started and output is visible here. 
2026-04-05 04:21:31.859033 | orchestrator | 2026-04-05 04:21:31.859155 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:21:31.859184 | orchestrator | 2026-04-05 04:21:31.859228 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:21:31.859250 | orchestrator | Sunday 05 April 2026 04:19:43 +0000 (0:00:00.204) 0:00:00.204 ********** 2026-04-05 04:21:31.859270 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:21:31.859310 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:21:31.859321 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:21:31.859332 | orchestrator | 2026-04-05 04:21:31.859343 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:21:31.859354 | orchestrator | Sunday 05 April 2026 04:19:43 +0000 (0:00:00.372) 0:00:00.577 ********** 2026-04-05 04:21:31.859365 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-05 04:21:31.859377 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-05 04:21:31.859387 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-05 04:21:31.859398 | orchestrator | 2026-04-05 04:21:31.859408 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-05 04:21:31.859419 | orchestrator | 2026-04-05 04:21:31.859430 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-05 04:21:31.859440 | orchestrator | Sunday 05 April 2026 04:19:44 +0000 (0:00:00.717) 0:00:01.294 ********** 2026-04-05 04:21:31.859451 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 04:21:31.859462 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 04:21:31.859472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 04:21:31.859483 | orchestrator | 
2026-04-05 04:21:31.859497 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 04:21:31.859522 | orchestrator | Sunday 05 April 2026 04:19:44 +0000 (0:00:00.450) 0:00:01.745 ********** 2026-04-05 04:21:31.859539 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:21:31.859559 | orchestrator | 2026-04-05 04:21:31.859577 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-05 04:21:31.859597 | orchestrator | Sunday 05 April 2026 04:19:45 +0000 (0:00:00.646) 0:00:02.392 ********** 2026-04-05 04:21:31.859617 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:21:31.859635 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:21:31.859655 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:21:31.859675 | orchestrator | 2026-04-05 04:21:31.859695 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-05 04:21:31.859711 | orchestrator | Sunday 05 April 2026 04:19:49 +0000 (0:00:03.615) 0:00:06.007 ********** 2026-04-05 04:21:31.859730 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-05 04:21:31.859748 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-05 04:21:31.859768 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-05 04:21:31.859789 | orchestrator | mariadb_bootstrap_restart 2026-04-05 04:21:31.859866 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:21:31.859914 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:21:31.859927 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:21:31.859946 | orchestrator | 2026-04-05 04:21:31.859963 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-05 04:21:31.859981 | orchestrator | 
skipping: no hosts matched 2026-04-05 04:21:31.859999 | orchestrator | 2026-04-05 04:21:31.860018 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-05 04:21:31.860038 | orchestrator | skipping: no hosts matched 2026-04-05 04:21:31.860055 | orchestrator | 2026-04-05 04:21:31.860070 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-05 04:21:31.860081 | orchestrator | skipping: no hosts matched 2026-04-05 04:21:31.860091 | orchestrator | 2026-04-05 04:21:31.860102 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-05 04:21:31.860113 | orchestrator | 2026-04-05 04:21:31.860124 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-05 04:21:31.860135 | orchestrator | Sunday 05 April 2026 04:21:30 +0000 (0:01:41.258) 0:01:47.266 ********** 2026-04-05 04:21:31.860145 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:21:31.860156 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:21:31.860180 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:21:31.860203 | orchestrator | 2026-04-05 04:21:31.860214 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-05 04:21:31.860225 | orchestrator | Sunday 05 April 2026 04:21:30 +0000 (0:00:00.313) 0:01:47.580 ********** 2026-04-05 04:21:31.860236 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:21:31.860246 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:21:31.860257 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:21:31.860268 | orchestrator | 2026-04-05 04:21:31.860278 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:21:31.860290 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 
04:21:31.860302 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 04:21:31.860314 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 04:21:31.860324 | orchestrator | 2026-04-05 04:21:31.860335 | orchestrator | 2026-04-05 04:21:31.860346 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:21:31.860356 | orchestrator | Sunday 05 April 2026 04:21:31 +0000 (0:00:00.476) 0:01:48.056 ********** 2026-04-05 04:21:31.860367 | orchestrator | =============================================================================== 2026-04-05 04:21:31.860378 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 101.26s 2026-04-05 04:21:31.860412 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.62s 2026-04-05 04:21:31.860424 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2026-04-05 04:21:31.860434 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.65s 2026-04-05 04:21:31.860445 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.48s 2026-04-05 04:21:31.860456 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.45s 2026-04-05 04:21:31.860467 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-04-05 04:21:31.860477 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2026-04-05 04:21:32.254372 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-05 04:21:32.266479 | orchestrator | + set -e 2026-04-05 04:21:32.266568 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:21:32.267530 | orchestrator | ++ export 
INTERACTIVE=false 2026-04-05 04:21:32.267590 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:21:32.267609 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:21:32.267626 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:21:32.267644 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 04:21:32.269810 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 04:21:32.277585 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:21:32.277660 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:21:32.277674 | orchestrator | + export OS_CLOUD=admin 2026-04-05 04:21:32.277684 | orchestrator | + OS_CLOUD=admin 2026-04-05 04:21:32.277693 | orchestrator | + echo 2026-04-05 04:21:32.277703 | orchestrator | 2026-04-05 04:21:32.277712 | orchestrator | # OpenStack endpoints 2026-04-05 04:21:32.277721 | orchestrator | 2026-04-05 04:21:32.277730 | orchestrator | + echo '# OpenStack endpoints' 2026-04-05 04:21:32.277739 | orchestrator | + echo 2026-04-05 04:21:32.277748 | orchestrator | + openstack endpoint list 2026-04-05 04:21:35.933833 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 04:21:35.934000 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-05 04:21:35.934077 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 04:21:35.934112 | orchestrator | | 07b7e6164bc94abfb5d60f8152956714 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-05 04:21:35.934123 | orchestrator | | 0cc86fa669ff483a9bf473cd7990230b | RegionOne | manila | share | True | public | 
https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-05 04:21:35.934134 | orchestrator | | 15be6b3ed327479d8433ee3710783003 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-05 04:21:35.934145 | orchestrator | | 1f501fe843ca40b59d0c9f2764fb8216 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-05 04:21:35.934156 | orchestrator | | 3bd2c56d2eb74f84b51cae6d4ffb391c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-05 04:21:35.934166 | orchestrator | | 46a2c57c38894c8e8363ccfa8225cf7c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-05 04:21:35.934178 | orchestrator | | 49dfaf7fd9aa40a0bb118130e5a3313a | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-05 04:21:35.934189 | orchestrator | | 55d7ed0d8b1d4a07b78cc21067d1e405 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-05 04:21:35.934199 | orchestrator | | 58ed792da7ad4db0a8a26bf8e4fe496d | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-05 04:21:35.934210 | orchestrator | | 5bd814c8eda54e8db61160aa23f99282 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-05 04:21:35.934221 | orchestrator | | 6575a959ef474dcfb35e941f4ffbe453 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-05 04:21:35.934232 | orchestrator | | 6f6e3a9fd2df42848901e745d98dcf21 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-05 04:21:35.934242 | orchestrator | | 74170d993deb41cbbb56519d64337c31 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-05 04:21:35.934253 | 
orchestrator | | 80596c54a1994079ac084ae17fd054b1 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-05 04:21:35.934264 | orchestrator | | 8bec892317e84be19a8ec5b6d0991c7f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-05 04:21:35.934274 | orchestrator | | 96e8397f1d064a0cbef5331c2107172b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-05 04:21:35.934285 | orchestrator | | 99d5072493f84c21814b3c29aba36a1d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-05 04:21:35.934296 | orchestrator | | a0f545351afe430ebf500758abcef159 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-05 04:21:35.934306 | orchestrator | | a51b5bac1d604ec2ab1ed244112690d3 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-05 04:21:35.934317 | orchestrator | | b12ee1474589431b8bace137775d3c62 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-05 04:21:35.934355 | orchestrator | | bc4ecfa9d29449a897ccec79007e5167 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-05 04:21:35.934376 | orchestrator | | c3d9559bcb3b49769ea704732acd210c | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-05 04:21:35.934390 | orchestrator | | c5cb2d6adc1e47479e61b8b2f12cb10b | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-05 04:21:35.934403 | orchestrator | | ca52d0113823466caf1ff7a2ee1c5b95 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-05 04:21:35.934416 | orchestrator | | 
cdf4b41e16ff405db61559031c9edcf9 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-05 04:21:35.934430 | orchestrator | | d2eef68c15e947d79b96d624efbe6ce6 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-05 04:21:35.934442 | orchestrator | | de7ce786dd324602be6db256eb948d49 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-05 04:21:35.934452 | orchestrator | | e9a534245e3a4dd08b3722a84b0cc502 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-05 04:21:35.934463 | orchestrator | | f1b7f0a33b6f4906b1baf2d85452abc8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-05 04:21:35.934474 | orchestrator | | f9396148a41343f592f9a78ffc011d45 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-05 04:21:35.934485 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-05 04:21:36.313895 | orchestrator | 2026-04-05 04:21:36.313994 | orchestrator | # Cinder 2026-04-05 04:21:36.314010 | orchestrator | 2026-04-05 04:21:36.314080 | orchestrator | + echo 2026-04-05 04:21:36.314089 | orchestrator | + echo '# Cinder' 2026-04-05 04:21:36.314096 | orchestrator | + echo 2026-04-05 04:21:36.314103 | orchestrator | + openstack volume service list 2026-04-05 04:21:39.118398 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-05 04:21:39.118494 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-05 04:21:39.118506 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
2026-04-05 04:21:39.118515 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T04:21:31.000000 |
2026-04-05 04:21:39.118523 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T04:21:31.000000 |
2026-04-05 04:21:39.118531 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T04:21:31.000000 |
2026-04-05 04:21:39.118539 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-05T04:21:31.000000 |
2026-04-05 04:21:39.118547 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-05T04:21:38.000000 |
2026-04-05 04:21:39.118555 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-05T04:21:29.000000 |
2026-04-05 04:21:39.118563 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-05T04:21:31.000000 |
2026-04-05 04:21:39.118571 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-05T04:21:34.000000 |
2026-04-05 04:21:39.118597 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-05T04:21:34.000000 |
2026-04-05 04:21:39.118606 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 04:21:39.555298 | orchestrator |
2026-04-05 04:21:39.555378 | orchestrator | # Neutron
2026-04-05 04:21:39.555387 | orchestrator |
2026-04-05 04:21:39.555393 | orchestrator | + echo
2026-04-05 04:21:39.555399 | orchestrator | + echo '# Neutron'
2026-04-05 04:21:39.555407 | orchestrator | + echo
2026-04-05 04:21:39.555412 | orchestrator | + openstack network agent list
2026-04-05 04:21:42.395097 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 04:21:42.395199 |
orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-05 04:21:42.395215 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 04:21:42.395227 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395238 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395269 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395280 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395291 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395301 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-05 04:21:42.395312 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 04:21:42.395323 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 04:21:42.395334 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 04:21:42.395344 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 04:21:42.785843 | orchestrator | + openstack network service provider list
2026-04-05 04:21:45.766446 | orchestrator | +---------------+------+---------+
2026-04-05 04:21:45.766553 | orchestrator | | Service Type | Name | Default |
2026-04-05 04:21:45.766566 | orchestrator | +---------------+------+---------+
2026-04-05 04:21:45.766572 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-05 04:21:45.766577 | orchestrator | +---------------+------+---------+
2026-04-05 04:21:46.298598 | orchestrator |
2026-04-05 04:21:46.298710 | orchestrator | # Nova
2026-04-05 04:21:46.298725 | orchestrator |
2026-04-05 04:21:46.298736 | orchestrator | + echo
2026-04-05 04:21:46.298747 | orchestrator | + echo '# Nova'
2026-04-05 04:21:46.298757 | orchestrator | + echo
2026-04-05 04:21:46.298767 | orchestrator | + openstack compute service list
2026-04-05 04:21:49.234583 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 04:21:49.234687 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 04:21:49.234702 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 04:21:49.234743 | orchestrator | | f775f423-0d69-496e-82f8-2fe2f6571662 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T04:21:42.000000 |
2026-04-05 04:21:49.234754 | orchestrator | | b6a43bca-176c-41b8-aa16-1d48be363599 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T04:21:48.000000 |
2026-04-05 04:21:49.234765 | orchestrator | | 81060e8f-282f-4e0b-b94e-0cd362086160 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T04:21:39.000000 |
2026-04-05 04:21:49.234777 | orchestrator | | 1852e77e-a7f0-49ed-9780-af6d3674f8a5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-05T04:21:46.000000 |
2026-04-05 04:21:49.234790 | orchestrator | | 91486a54-2e88-4c50-80f5-51c1889fe10a | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-05T04:21:48.000000 |
2026-04-05 04:21:49.234801 | orchestrator | | f8cbd1ee-48a4-48d6-9b6f-afbc2ff571e8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-05T04:21:48.000000 |
2026-04-05 04:21:49.234812 | orchestrator | | b6bcc8b5-f1ad-4148-825a-ef100b1636e2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-05T04:21:40.000000 |
2026-04-05 04:21:49.234823 | orchestrator | | 9350d856-2adf-49c7-81fe-2646d0965852 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-05T04:21:41.000000 |
2026-04-05 04:21:49.234834 | orchestrator | | 52112d3f-4aed-4606-9b90-0b3f50064b89 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-05T04:21:41.000000 |
2026-04-05 04:21:49.234845 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 04:21:49.725913 | orchestrator | + openstack hypervisor list
2026-04-05 04:21:52.896741 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 04:21:52.896815 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-05 04:21:52.896822 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 04:21:52.896827 | orchestrator | | 52ba7715-3dab-4ac8-af6d-d34d4eeee8c7 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-05 04:21:52.896831 | orchestrator | | 125c80d6-25de-43cd-9687-0e659acf3d20 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-05 04:21:52.896835 | orchestrator | | 09b59826-c511-4ca1-8094-cc59cdf53dd4 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-05 04:21:52.896839 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 04:21:53.330995 | orchestrator |
2026-04-05 04:21:53.331079 | orchestrator | # Run OpenStack test play
2026-04-05 04:21:53.331091 | orchestrator |
2026-04-05 04:21:53.331098 | orchestrator | + echo
2026-04-05 04:21:53.331105 | orchestrator | + echo '# Run OpenStack test play'
2026-04-05 04:21:53.331116 | orchestrator | + echo
2026-04-05 04:21:53.331123 | orchestrator | + osism apply --environment openstack test
2026-04-05 04:21:55.493402 | orchestrator | 2026-04-05 04:21:55 | INFO  | Trying to run play test in environment openstack
2026-04-05 04:22:05.577700 | orchestrator | 2026-04-05 04:22:05 | INFO  | Task ef4ef648-2d16-49cc-89ea-40490d0ee7e7 (test) was prepared for execution.
2026-04-05 04:22:05.577822 | orchestrator | 2026-04-05 04:22:05 | INFO  | It takes a moment until task ef4ef648-2d16-49cc-89ea-40490d0ee7e7 (test) has been started and output is visible here.
2026-04-05 04:25:39.705435 | orchestrator |
2026-04-05 04:25:39.705525 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-05 04:25:39.705536 | orchestrator |
2026-04-05 04:25:39.705543 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-05 04:25:39.705551 | orchestrator | Sunday 05 April 2026 04:22:10 +0000 (0:00:00.077) 0:00:00.077 **********
2026-04-05 04:25:39.705557 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705564 | orchestrator |
2026-04-05 04:25:39.705571 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-05 04:25:39.705577 | orchestrator | Sunday 05 April 2026 04:22:14 +0000 (0:00:04.130) 0:00:04.207 **********
2026-04-05 04:25:39.705602 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705609 | orchestrator |
2026-04-05 04:25:39.705615 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-05 04:25:39.705621 | orchestrator | Sunday 05 April 2026 04:22:19 +0000 (0:00:04.837) 0:00:09.045 **********
2026-04-05 04:25:39.705628 | orchestrator |
changed: [localhost]
2026-04-05 04:25:39.705634 | orchestrator |
2026-04-05 04:25:39.705640 | orchestrator | TASK [Create test project] *****************************************************
2026-04-05 04:25:39.705646 | orchestrator | Sunday 05 April 2026 04:22:27 +0000 (0:00:08.072) 0:00:17.117 **********
2026-04-05 04:25:39.705653 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705659 | orchestrator |
2026-04-05 04:25:39.705665 | orchestrator | TASK [Create test user] ********************************************************
2026-04-05 04:25:39.705671 | orchestrator | Sunday 05 April 2026 04:22:32 +0000 (0:00:04.929) 0:00:22.047 **********
2026-04-05 04:25:39.705677 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705683 | orchestrator |
2026-04-05 04:25:39.705690 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-05 04:25:39.705696 | orchestrator | Sunday 05 April 2026 04:22:37 +0000 (0:00:05.081) 0:00:27.128 **********
2026-04-05 04:25:39.705702 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-05 04:25:39.705709 | orchestrator | changed: [localhost] => (item=member)
2026-04-05 04:25:39.705716 | orchestrator | changed: [localhost] => (item=creator)
2026-04-05 04:25:39.705723 | orchestrator |
2026-04-05 04:25:39.705729 | orchestrator | TASK [Create test server group] ************************************************
2026-04-05 04:25:39.705735 | orchestrator | Sunday 05 April 2026 04:22:50 +0000 (0:00:13.223) 0:00:40.351 **********
2026-04-05 04:25:39.705741 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705747 | orchestrator |
2026-04-05 04:25:39.705767 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-05 04:25:39.705773 | orchestrator | Sunday 05 April 2026 04:22:55 +0000 (0:00:04.776) 0:00:45.128 **********
2026-04-05 04:25:39.705779 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705785 | orchestrator |
2026-04-05 04:25:39.705792 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-05 04:25:39.705798 | orchestrator | Sunday 05 April 2026 04:23:01 +0000 (0:00:05.461) 0:00:50.589 **********
2026-04-05 04:25:39.705804 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705810 | orchestrator |
2026-04-05 04:25:39.705816 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-05 04:25:39.705822 | orchestrator | Sunday 05 April 2026 04:23:05 +0000 (0:00:04.590) 0:00:55.180 **********
2026-04-05 04:25:39.705874 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705882 | orchestrator |
2026-04-05 04:25:39.705889 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-05 04:25:39.705895 | orchestrator | Sunday 05 April 2026 04:23:10 +0000 (0:00:04.517) 0:00:59.698 **********
2026-04-05 04:25:39.705901 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705907 | orchestrator |
2026-04-05 04:25:39.705913 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-05 04:25:39.705920 | orchestrator | Sunday 05 April 2026 04:23:14 +0000 (0:00:04.482) 0:01:04.180 **********
2026-04-05 04:25:39.705926 | orchestrator | changed: [localhost]
2026-04-05 04:25:39.705932 | orchestrator |
2026-04-05 04:25:39.705948 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-05 04:25:39.705961 | orchestrator | Sunday 05 April 2026 04:23:19 +0000 (0:00:04.937) 0:01:09.117 **********
2026-04-05 04:25:39.705968 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-05 04:25:39.705979 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-05 04:25:39.705988 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-05 04:25:39.706001 | orchestrator |
2026-04-05 04:25:39.706011 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-05 04:25:39.706114 | orchestrator | Sunday 05 April 2026 04:23:35 +0000 (0:00:16.161) 0:01:25.278 **********
2026-04-05 04:25:39.706127 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-05 04:25:39.706138 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-05 04:25:39.706145 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-05 04:25:39.706151 | orchestrator |
2026-04-05 04:25:39.706158 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-05 04:25:39.706164 | orchestrator | Sunday 05 April 2026 04:23:53 +0000 (0:00:17.435) 0:01:42.714 **********
2026-04-05 04:25:39.706170 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-05 04:25:39.706182 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-05 04:25:39.706188 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-05 04:25:39.706194 | orchestrator |
2026-04-05 04:25:39.706200 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-05 04:25:39.706207 | orchestrator |
2026-04-05 04:25:39.706213 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-05 04:25:39.706234 | orchestrator | Sunday 05 April 2026 04:24:26 +0000 (0:00:33.651) 0:02:16.365 **********
2026-04-05 04:25:39.706241 | orchestrator | ok: [localhost]
2026-04-05 04:25:39.706247 | orchestrator |
2026-04-05 04:25:39.706254 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-05 04:25:39.706260 | orchestrator | Sunday 05 April 2026 04:24:31 +0000 (0:00:04.051) 0:02:20.417 **********
2026-04-05 04:25:39.706267 | orchestrator | skipping: [localhost]
2026-04-05 04:25:39.706273 | orchestrator |
2026-04-05 04:25:39.706279 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-05 04:25:39.706285 | orchestrator | Sunday 05 April 2026 04:24:31 +0000 (0:00:00.056) 0:02:20.474 **********
2026-04-05 04:25:39.706291 | orchestrator | skipping: [localhost]
2026-04-05 04:25:39.706298 | orchestrator |
2026-04-05 04:25:39.706304 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-05 04:25:39.706310 | orchestrator | Sunday 05 April 2026 04:24:31 +0000 (0:00:00.054) 0:02:20.529 **********
2026-04-05 04:25:39.706316 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 04:25:39.706322 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 04:25:39.706328 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 04:25:39.706335 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 04:25:39.706341 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 04:25:39.706347 | orchestrator | skipping: [localhost]
2026-04-05 04:25:39.706353 | orchestrator |
2026-04-05 04:25:39.706359 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-05 04:25:39.706365 | orchestrator | Sunday 05 April 2026 04:24:31 +0000 (0:00:00.204) 0:02:20.733 **********
2026-04-05 04:25:39.706372 | orchestrator | skipping: [localhost]
2026-04-05 04:25:39.706378 | orchestrator |
2026-04-05 04:25:39.706384 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-05 04:25:39.706390 | orchestrator | Sunday 05 April 2026 04:24:31 +0000 (0:00:00.172) 0:02:20.906 **********
2026-04-05 04:25:39.706396 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 04:25:39.706402 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 04:25:39.706409 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 04:25:39.706415 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 04:25:39.706427 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 04:25:39.706433 | orchestrator |
2026-04-05 04:25:39.706439 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-05 04:25:39.706445 | orchestrator | Sunday 05 April 2026 04:24:36 +0000 (0:00:05.302) 0:02:26.209 **********
2026-04-05 04:25:39.706452 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-05 04:25:39.706459 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-05 04:25:39.706465 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-05 04:25:39.706471 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-05 04:25:39.706479 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j18988582082.3834', 'results_file': '/ansible/.ansible_async/j18988582082.3834', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:25:39.706487 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-05 04:25:39.706493 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j588440742029.3859', 'results_file': '/ansible/.ansible_async/j588440742029.3859', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:25:39.706501 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j539360146263.3884', 'results_file': '/ansible/.ansible_async/j539360146263.3884', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:25:39.706507 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j590916886726.3909', 'results_file': '/ansible/.ansible_async/j590916886726.3909', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:25:39.706517 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j102333871727.3934', 'results_file': '/ansible/.ansible_async/j102333871727.3934', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 04:25:39.706524 | orchestrator |
2026-04-05 04:25:39.706530 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-05 04:25:39.706536 | orchestrator | Sunday 05 April 2026 04:25:34 +0000 (0:00:57.954) 0:03:24.163 **********
2026-04-05
04:25:39.706547 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 04:26:53.564585 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 04:26:53.564691 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 04:26:53.564709 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 04:26:53.564717 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 04:26:53.564724 | orchestrator |
2026-04-05 04:26:53.564732 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-05 04:26:53.564739 | orchestrator | Sunday 05 April 2026 04:25:39 +0000 (0:00:04.935) 0:03:29.099 **********
2026-04-05 04:26:53.564746 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-05 04:26:53.564756 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j867403137204.4044', 'results_file': '/ansible/.ansible_async/j867403137204.4044', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.564766 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j256837768097.4069', 'results_file': '/ansible/.ansible_async/j256837768097.4069', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.564791 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j845460256592.4094', 'results_file': '/ansible/.ansible_async/j845460256592.4094', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.564799 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j951189774843.4119', 'results_file': '/ansible/.ansible_async/j951189774843.4119', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.564806 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j427331697927.4144', 'results_file': '/ansible/.ansible_async/j427331697927.4144', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.564867 | orchestrator |
2026-04-05 04:26:53.564881 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-05 04:26:53.564892 | orchestrator | Sunday 05 April 2026 04:25:49 +0000 (0:00:09.956) 0:03:39.056 **********
2026-04-05 04:26:53.564903 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 04:26:53.564913 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 04:26:53.564924 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 04:26:53.564935 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 04:26:53.564946 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 04:26:53.564957 | orchestrator |
2026-04-05 04:26:53.564969 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-05 04:26:53.564980 | orchestrator | Sunday 05 April 2026 04:25:55 +0000 (0:00:05.627) 0:03:44.683 **********
2026-04-05 04:26:53.564991 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-05 04:26:53.565002 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j701153616489.4213', 'results_file': '/ansible/.ansible_async/j701153616489.4213', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.565014 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j153963697407.4238', 'results_file': '/ansible/.ansible_async/j153963697407.4238', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.565026 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j9533643809.4271', 'results_file': '/ansible/.ansible_async/j9533643809.4271', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.565051 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j868107787871.4297', 'results_file': '/ansible/.ansible_async/j868107787871.4297', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.565076 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j13764843879.4323', 'results_file': '/ansible/.ansible_async/j13764843879.4323', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-05 04:26:53.565083 | orchestrator |
2026-04-05 04:26:53.565090 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-05 04:26:53.565097 | orchestrator | Sunday 05 April 2026 04:26:05 +0000 (0:00:10.610) 0:03:55.294 **********
2026-04-05 04:26:53.565111 | orchestrator | changed: [localhost]
2026-04-05 04:26:53.565119 | orchestrator |
2026-04-05 04:26:53.565126 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-05 04:26:53.565132 | orchestrator | Sunday 05 April 2026 04:26:12 +0000 (0:00:06.547) 0:04:01.841 **********
2026-04-05 04:26:53.565140 | orchestrator | changed: [localhost]
2026-04-05 04:26:53.565151 | orchestrator |
2026-04-05 04:26:53.565161 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-05 04:26:53.565180 | orchestrator | Sunday 05 April 2026 04:26:26 +0000 (0:00:14.208) 0:04:16.050 **********
2026-04-05 04:26:53.565192 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-05 04:26:53.565203 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-05 04:26:53.565214 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-05 04:26:53.565224 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-05 04:26:53.565236 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-05 04:26:53.565247 | orchestrator |
2026-04-05 04:26:53.565259 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-05 04:26:53.565271 | orchestrator | Sunday 05 April 2026 04:26:53 +0000 (0:00:26.470) 0:04:42.520 **********
2026-04-05 04:26:53.565283 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-05 04:26:53.565294 | orchestrator |  "msg": "test: 192.168.112.191"
2026-04-05 04:26:53.565306 | orchestrator | }
2026-04-05 04:26:53.565317 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-05 04:26:53.565329 | orchestrator |  "msg": "test-1: 192.168.112.105"
2026-04-05 04:26:53.565341 | orchestrator | }
2026-04-05 04:26:53.565351 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-05 04:26:53.565362 | orchestrator |  "msg": "test-2: 192.168.112.170"
2026-04-05 04:26:53.565372 | orchestrator | }
2026-04-05 04:26:53.565384 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-05 04:26:53.565395 | orchestrator |  "msg": "test-3: 192.168.112.137"
2026-04-05 04:26:53.565407 | orchestrator | }
2026-04-05 04:26:53.565418 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-05 04:26:53.565431 | orchestrator |  "msg": "test-4: 192.168.112.180"
2026-04-05 04:26:53.565444 | orchestrator | }
2026-04-05 04:26:53.565455 | orchestrator |
2026-04-05 04:26:53.565468 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:26:53.565482 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 04:26:53.565495 | orchestrator |
2026-04-05 04:26:53.565507 | orchestrator |
2026-04-05 04:26:53.565520 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:26:53.565533 | orchestrator | Sunday 05 April 2026 04:26:53 +0000 (0:00:00.134) 0:04:42.655 **********
2026-04-05 04:26:53.565545 | orchestrator | ===============================================================================
2026-04-05 04:26:53.565556 | orchestrator | Wait for instance creation to complete --------------------------------- 57.95s
2026-04-05 04:26:53.565569 | orchestrator | Create test routers ---------------------------------------------------- 33.65s
2026-04-05 04:26:53.565581 | orchestrator | Create floating ip addresses ------------------------------------------- 26.47s
2026-04-05 04:26:53.565593 | orchestrator | Create test subnets ---------------------------------------------------- 17.44s
2026-04-05 04:26:53.565605 | orchestrator | Create test networks --------------------------------------------------- 16.16s
2026-04-05 04:26:53.565617 | orchestrator | Attach test volume ----------------------------------------------------- 14.21s
2026-04-05 04:26:53.565629 | orchestrator | Add member roles to user test ------------------------------------------ 13.22s
2026-04-05 04:26:53.565641 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.61s
2026-04-05 04:26:53.565653 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.96s
2026-04-05 04:26:53.565675 | orchestrator | Add manager role to user test-admin ------------------------------------- 8.07s
2026-04-05 04:26:53.565687 | orchestrator | Create test volume ------------------------------------------------------ 6.55s
2026-04-05 04:26:53.565699 | orchestrator | Add tag to instances ---------------------------------------------------- 5.63s
2026-04-05 04:26:53.565711 | orchestrator | Create ssh security group ----------------------------------------------- 5.46s
2026-04-05 04:26:53.565723 | orchestrator | Create test instances --------------------------------------------------- 5.30s
2026-04-05 04:26:53.565735 | orchestrator | Create test user -------------------------------------------------------- 5.08s
2026-04-05 04:26:53.565748 | orchestrator | Create test keypair ----------------------------------------------------- 4.94s
2026-04-05 04:26:53.565759 | orchestrator | Add metadata to instances ----------------------------------------------- 4.94s
2026-04-05 04:26:53.565771 | orchestrator | Create test project ----------------------------------------------------- 4.93s
2026-04-05 04:26:53.565790 | orchestrator | Create test-admin user -------------------------------------------------- 4.84s
2026-04-05 04:26:53.565802 | orchestrator | Create test server group ------------------------------------------------ 4.78s
2026-04-05 04:26:53.955661 | orchestrator | + server_list
2026-04-05 04:26:53.955786 | orchestrator | + openstack --os-cloud test server list
2026-04-05 04:26:57.914744 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05
04:26:57.914864 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-05 04:26:57.914876 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 04:26:57.914883 | orchestrator | | 0b1222a5-f3cb-41ab-92be-469ce301e9ad | test-3 | ACTIVE | test-2=192.168.112.137, 192.168.201.118 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 04:26:57.914889 | orchestrator | | ecfbad1d-1672-44a0-9b9d-e8da7a8a2f92 | test-4 | ACTIVE | test-3=192.168.112.180, 192.168.202.73 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 04:26:57.914896 | orchestrator | | 0a2bbed2-8c42-43cd-8046-2895ded493c5 | test-2 | ACTIVE | test-2=192.168.112.170, 192.168.201.216 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 04:26:57.914901 | orchestrator | | 097c8630-3aa1-452f-8464-e68c61053ff7 | test | ACTIVE | test-1=192.168.112.191, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 04:26:57.914907 | orchestrator | | aa0a3eb0-7fe8-4d0b-b428-c7baecf5448f | test-1 | ACTIVE | test-1=192.168.112.105, 192.168.200.251 | N/A (booted from volume) | SCS-1L-1 |
2026-04-05 04:26:57.914913 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-05 04:26:58.262504 | orchestrator | + openstack --os-cloud test server show test
2026-04-05 04:27:02.253638 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 04:27:02.253783 | orchestrator | | Field | Value |
2026-04-05 04:27:02.253842 |
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:02.253885 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 04:27:02.253898 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 04:27:02.253909 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 04:27:02.253921 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-05 04:27:02.253933 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 04:27:02.253978 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 04:27:02.254014 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 04:27:02.254095 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 04:27:02.254109 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 04:27:02.254138 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 04:27:02.254152 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 04:27:02.254175 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 04:27:02.254190 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 04:27:02.254208 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 04:27:02.254222 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 04:27:02.254236 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:08.000000 | 2026-04-05 04:27:02.254259 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 04:27:02.254273 | orchestrator | | accessIPv4 | | 2026-04-05 04:27:02.254284 | orchestrator | | accessIPv6 | | 2026-04-05 04:27:02.254304 | orchestrator 
| | addresses | test-1=192.168.112.191, 192.168.200.66 | 2026-04-05 04:27:02.254315 | orchestrator | | config_drive | | 2026-04-05 04:27:02.254326 | orchestrator | | created | 2026-04-05T04:24:41Z | 2026-04-05 04:27:02.254337 | orchestrator | | description | None | 2026-04-05 04:27:02.254353 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 04:27:02.254365 | orchestrator | | hostId | 51219fb0b745a037ca58054f81e361499c4bce5c336f51de418af919 | 2026-04-05 04:27:02.254376 | orchestrator | | host_status | None | 2026-04-05 04:27:02.254394 | orchestrator | | id | 097c8630-3aa1-452f-8464-e68c61053ff7 | 2026-04-05 04:27:02.254406 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 04:27:02.254424 | orchestrator | | key_name | test | 2026-04-05 04:27:02.254435 | orchestrator | | locked | False | 2026-04-05 04:27:02.254446 | orchestrator | | locked_reason | None | 2026-04-05 04:27:02.254457 | orchestrator | | name | test | 2026-04-05 04:27:02.254473 | orchestrator | | pinned_availability_zone | None | 2026-04-05 04:27:02.254484 | orchestrator | | progress | 0 | 2026-04-05 04:27:02.254495 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 04:27:02.254507 | orchestrator | | properties | hostname='test' | 2026-04-05 04:27:02.254525 | orchestrator | | security_groups | name='icmp' | 2026-04-05 04:27:02.254544 | orchestrator | | | name='ssh' | 2026-04-05 04:27:02.254556 | orchestrator | | server_groups | None | 2026-04-05 04:27:02.254567 | orchestrator | | status | ACTIVE | 2026-04-05 04:27:02.254578 | orchestrator | | tags | test | 2026-04-05 04:27:02.254589 | orchestrator | | 
trusted_image_certificates | None | 2026-04-05 04:27:02.254612 | orchestrator | | updated | 2026-04-05T04:25:41Z | 2026-04-05 04:27:02.254623 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 04:27:02.254634 | orchestrator | | volumes_attached | delete_on_termination='True', id='01b028e0-fdbe-4ff7-8601-c3cc2494aae0' | 2026-04-05 04:27:02.254646 | orchestrator | | | delete_on_termination='False', id='f2d72a2c-7635-430f-aad8-fdc622687c0d' | 2026-04-05 04:27:02.257784 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:02.627969 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-05 04:27:05.893785 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:05.893939 | orchestrator | | Field | Value | 2026-04-05 04:27:05.893960 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-04-05 04:27:05.893972 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 04:27:05.893984 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 04:27:05.894012 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 04:27:05.894081 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-05 04:27:05.894093 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 04:27:05.894124 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 04:27:05.894157 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 04:27:05.894177 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 04:27:05.894198 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 04:27:05.894227 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 04:27:05.894249 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 04:27:05.894267 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 04:27:05.894286 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 04:27:05.894303 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 04:27:05.894322 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 04:27:05.894357 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:10.000000 | 2026-04-05 04:27:05.894390 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 04:27:05.894410 | orchestrator | | accessIPv4 | | 2026-04-05 04:27:05.894432 | orchestrator | | accessIPv6 | | 2026-04-05 04:27:05.894453 | orchestrator | | addresses | test-1=192.168.112.105, 192.168.200.251 | 2026-04-05 04:27:05.894473 | orchestrator | | config_drive | | 2026-04-05 04:27:05.894497 | orchestrator | | created | 2026-04-05T04:24:41Z | 2026-04-05 04:27:05.894514 | orchestrator | | description | None | 2026-04-05 04:27:05.894526 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 04:27:05.894545 | orchestrator | | hostId | 51219fb0b745a037ca58054f81e361499c4bce5c336f51de418af919 | 2026-04-05 04:27:05.894556 | orchestrator | | host_status | None | 2026-04-05 04:27:05.894575 | orchestrator | | id | aa0a3eb0-7fe8-4d0b-b428-c7baecf5448f | 2026-04-05 04:27:05.894587 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 04:27:05.894598 | orchestrator | | key_name | test | 2026-04-05 04:27:05.894609 | orchestrator | | locked | False | 2026-04-05 04:27:05.894620 | orchestrator | | locked_reason | None | 2026-04-05 04:27:05.894632 | orchestrator | | name | test-1 | 2026-04-05 04:27:05.894648 | orchestrator | | pinned_availability_zone | None | 2026-04-05 04:27:05.894667 | orchestrator | | progress | 0 | 2026-04-05 04:27:05.894678 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 04:27:05.894689 | orchestrator | | properties | hostname='test-1' | 2026-04-05 04:27:05.894707 | orchestrator | | security_groups | name='icmp' | 2026-04-05 04:27:05.894719 | orchestrator | | | name='ssh' | 2026-04-05 04:27:05.894730 | orchestrator | | server_groups | None | 2026-04-05 04:27:05.894741 | orchestrator | | status | ACTIVE | 2026-04-05 04:27:05.894752 | orchestrator | | tags | test | 2026-04-05 04:27:05.894763 | orchestrator | | trusted_image_certificates | None | 2026-04-05 04:27:05.894787 | orchestrator | | updated | 2026-04-05T04:25:41Z | 2026-04-05 04:27:05.894799 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 04:27:05.894851 | orchestrator | | volumes_attached | delete_on_termination='True', id='3620145d-6e82-46dc-a90f-d44818509785' | 2026-04-05 04:27:05.898905 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:06.240981 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-05 04:27:09.687204 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:09.687290 | orchestrator | | Field | Value | 2026-04-05 04:27:09.687302 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:09.687309 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 04:27:09.687316 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 04:27:09.687340 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 04:27:09.687359 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-05 04:27:09.687366 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 04:27:09.687373 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 
04:27:09.687394 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 04:27:09.687402 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 04:27:09.687409 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 04:27:09.687416 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 04:27:09.687423 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 04:27:09.687430 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 04:27:09.687452 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 04:27:09.687459 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 04:27:09.687466 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 04:27:09.687473 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:09.000000 | 2026-04-05 04:27:09.687486 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 04:27:09.687493 | orchestrator | | accessIPv4 | | 2026-04-05 04:27:09.687500 | orchestrator | | accessIPv6 | | 2026-04-05 04:27:09.687507 | orchestrator | | addresses | test-2=192.168.112.170, 192.168.201.216 | 2026-04-05 04:27:09.687514 | orchestrator | | config_drive | | 2026-04-05 04:27:09.687525 | orchestrator | | created | 2026-04-05T04:24:42Z | 2026-04-05 04:27:09.687536 | orchestrator | | description | None | 2026-04-05 04:27:09.687543 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 04:27:09.687549 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 | 2026-04-05 04:27:09.687556 | orchestrator | | host_status | None | 2026-04-05 04:27:09.687568 | orchestrator | | id | 
0a2bbed2-8c42-43cd-8046-2895ded493c5 | 2026-04-05 04:27:09.687576 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 04:27:09.687582 | orchestrator | | key_name | test | 2026-04-05 04:27:09.687589 | orchestrator | | locked | False | 2026-04-05 04:27:09.687601 | orchestrator | | locked_reason | None | 2026-04-05 04:27:09.687608 | orchestrator | | name | test-2 | 2026-04-05 04:27:09.687615 | orchestrator | | pinned_availability_zone | None | 2026-04-05 04:27:09.687623 | orchestrator | | progress | 0 | 2026-04-05 04:27:09.687630 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 04:27:09.687637 | orchestrator | | properties | hostname='test-2' | 2026-04-05 04:27:09.687649 | orchestrator | | security_groups | name='icmp' | 2026-04-05 04:27:09.687656 | orchestrator | | | name='ssh' | 2026-04-05 04:27:09.687663 | orchestrator | | server_groups | None | 2026-04-05 04:27:09.688052 | orchestrator | | status | ACTIVE | 2026-04-05 04:27:09.688070 | orchestrator | | tags | test | 2026-04-05 04:27:09.688078 | orchestrator | | trusted_image_certificates | None | 2026-04-05 04:27:09.688087 | orchestrator | | updated | 2026-04-05T04:25:42Z | 2026-04-05 04:27:09.688094 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 04:27:09.688103 | orchestrator | | volumes_attached | delete_on_termination='True', id='7e5114bf-2f9d-4068-9186-5c9933f3258c' | 2026-04-05 04:27:09.690803 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:10.024410 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-05 04:27:13.412369 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:13.412496 | orchestrator | | Field | Value | 2026-04-05 04:27:13.412552 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:13.412586 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 04:27:13.412603 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 04:27:13.412618 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 04:27:13.412634 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-05 04:27:13.412649 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 04:27:13.412664 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 04:27:13.412704 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 04:27:13.412721 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 04:27:13.412737 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 04:27:13.412767 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 04:27:13.412791 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 04:27:13.412808 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 04:27:13.412858 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-05 04:27:13.412875 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 04:27:13.412890 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 04:27:13.412908 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:09.000000 | 2026-04-05 04:27:13.412938 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 04:27:13.412956 | orchestrator | | accessIPv4 | | 2026-04-05 04:27:13.412987 | orchestrator | | accessIPv6 | | 2026-04-05 04:27:13.413005 | orchestrator | | addresses | test-2=192.168.112.137, 192.168.201.118 | 2026-04-05 04:27:13.413031 | orchestrator | | config_drive | | 2026-04-05 04:27:13.413051 | orchestrator | | created | 2026-04-05T04:24:46Z | 2026-04-05 04:27:13.413068 | orchestrator | | description | None | 2026-04-05 04:27:13.413083 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 04:27:13.413095 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 | 2026-04-05 04:27:13.413108 | orchestrator | | host_status | None | 2026-04-05 04:27:13.413136 | orchestrator | | id | 0b1222a5-f3cb-41ab-92be-469ce301e9ad | 2026-04-05 04:27:13.413162 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 04:27:13.413180 | orchestrator | | key_name | test | 2026-04-05 04:27:13.413197 | orchestrator | | locked | False | 2026-04-05 04:27:13.413222 | orchestrator | | locked_reason | None | 2026-04-05 04:27:13.413241 | orchestrator | | name | test-3 | 2026-04-05 04:27:13.413258 | orchestrator | | pinned_availability_zone | None | 2026-04-05 04:27:13.413273 | orchestrator | | progress | 0 | 2026-04-05 
04:27:13.413290 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 04:27:13.413307 | orchestrator | | properties | hostname='test-3' | 2026-04-05 04:27:13.413341 | orchestrator | | security_groups | name='icmp' | 2026-04-05 04:27:13.413358 | orchestrator | | | name='ssh' | 2026-04-05 04:27:13.413375 | orchestrator | | server_groups | None | 2026-04-05 04:27:13.413393 | orchestrator | | status | ACTIVE | 2026-04-05 04:27:13.413417 | orchestrator | | tags | test | 2026-04-05 04:27:13.413433 | orchestrator | | trusted_image_certificates | None | 2026-04-05 04:27:13.413449 | orchestrator | | updated | 2026-04-05T04:25:43Z | 2026-04-05 04:27:13.413460 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 04:27:13.413470 | orchestrator | | volumes_attached | delete_on_termination='True', id='2eb5d65b-8149-469b-ac2c-861b74b65c25' | 2026-04-05 04:27:13.426249 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:13.955745 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-05 04:27:17.195976 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:17.197021 | orchestrator | | Field | Value | 2026-04-05 04:27:17.197097 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 04:27:17.197133 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 04:27:17.197149 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 04:27:17.197163 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 04:27:17.197176 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-05 04:27:17.197189 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 04:27:17.197202 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 04:27:17.197262 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 04:27:17.197276 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 04:27:17.197287 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 04:27:17.197298 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 04:27:17.197310 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 04:27:17.197322 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 04:27:17.197333 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 04:27:17.197344 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 04:27:17.197356 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 04:27:17.197374 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:11.000000 | 2026-04-05 04:27:17.197393 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 04:27:17.197477 | orchestrator | | accessIPv4 | | 2026-04-05 04:27:17.197498 | orchestrator | | accessIPv6 | | 2026-04-05 04:27:17.197509 | orchestrator | | 
addresses | test-3=192.168.112.180, 192.168.202.73 | 2026-04-05 04:27:17.197542 | orchestrator | | config_drive | | 2026-04-05 04:27:17.197566 | orchestrator | | created | 2026-04-05T04:24:44Z | 2026-04-05 04:27:17.197578 | orchestrator | | description | None | 2026-04-05 04:27:17.197589 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 04:27:17.197608 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 | 2026-04-05 04:27:17.197620 | orchestrator | | host_status | None | 2026-04-05 04:27:17.197642 | orchestrator | | id | ecfbad1d-1672-44a0-9b9d-e8da7a8a2f92 | 2026-04-05 04:27:17.197653 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 04:27:17.197664 | orchestrator | | key_name | test | 2026-04-05 04:27:17.197676 | orchestrator | | locked | False | 2026-04-05 04:27:17.197692 | orchestrator | | locked_reason | None | 2026-04-05 04:27:17.197703 | orchestrator | | name | test-4 | 2026-04-05 04:27:17.197715 | orchestrator | | pinned_availability_zone | None | 2026-04-05 04:27:17.197732 | orchestrator | | progress | 0 | 2026-04-05 04:27:17.197744 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 04:27:17.197755 | orchestrator | | properties | hostname='test-4' | 2026-04-05 04:27:17.197774 | orchestrator | | security_groups | name='icmp' | 2026-04-05 04:27:17.197786 | orchestrator | | | name='ssh' | 2026-04-05 04:27:17.197797 | orchestrator | | server_groups | None | 2026-04-05 04:27:17.197840 | orchestrator | | status | ACTIVE | 2026-04-05 04:27:17.197859 | orchestrator | | tags | test | 2026-04-05 04:27:17.197871 | orchestrator | | 
trusted_image_certificates | None |
2026-04-05 04:27:17.197882 | orchestrator | | updated | 2026-04-05T04:25:44Z |
2026-04-05 04:27:17.197900 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 |
2026-04-05 04:27:17.197912 | orchestrator | | volumes_attached | delete_on_termination='True', id='9325f864-f7d6-4498-ba97-70dbfcc5f82f' |
2026-04-05 04:27:17.201847 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 04:27:17.753879 | orchestrator | + server_ping
2026-04-05 04:27:17.754611 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 04:27:17.754791 | orchestrator | ++ tr -d '\r'
2026-04-05 04:27:21.024345 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 04:27:21.024437 | orchestrator | + ping -c3 192.168.112.105
2026-04-05 04:27:21.040709 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2026-04-05 04:27:21.040856 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.60 ms
2026-04-05 04:27:22.038411 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=3.07 ms
2026-04-05 04:27:23.038111 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.99 ms
2026-04-05 04:27:23.038229 | orchestrator |
2026-04-05 04:27:23.038247 | orchestrator | --- 192.168.112.105 ping statistics ---
2026-04-05 04:27:23.038261 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 04:27:23.038273 | orchestrator | rtt min/avg/max/mdev = 1.993/4.219/7.598/2.428 ms
2026-04-05 04:27:23.040744 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 04:27:23.040784 | orchestrator | + ping -c3 192.168.112.137
2026-04-05 04:27:23.053054 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
2026-04-05 04:27:23.053142 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=5.61 ms
2026-04-05 04:27:24.051025 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.42 ms
2026-04-05 04:27:25.052146 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.77 ms
2026-04-05 04:27:25.052236 | orchestrator |
2026-04-05 04:27:25.052246 | orchestrator | --- 192.168.112.137 ping statistics ---
2026-04-05 04:27:25.052255 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 04:27:25.052263 | orchestrator | rtt min/avg/max/mdev = 1.769/3.267/5.613/1.679 ms
2026-04-05 04:27:25.052959 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 04:27:25.052984 | orchestrator | + ping -c3 192.168.112.170
2026-04-05 04:27:25.066298 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
2026-04-05 04:27:25.066389 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=9.38 ms
2026-04-05 04:27:26.061347 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.33 ms
2026-04-05 04:27:27.063617 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=2.32 ms
2026-04-05 04:27:27.063730 | orchestrator |
2026-04-05 04:27:27.063742 | orchestrator | --- 192.168.112.170 ping statistics ---
2026-04-05 04:27:27.063750 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 04:27:27.063757 | orchestrator | rtt min/avg/max/mdev = 2.319/4.675/9.381/3.327 ms
2026-04-05 04:27:27.063777 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 04:27:27.064692 | orchestrator | + ping -c3 192.168.112.191
2026-04-05 04:27:27.076331 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-04-05 04:27:27.076419 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=6.66 ms
2026-04-05 04:27:28.074943 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.24 ms
2026-04-05 04:27:29.076418 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.97 ms
2026-04-05 04:27:29.076525 | orchestrator |
2026-04-05 04:27:29.076539 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-04-05 04:27:29.076548 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-04-05 04:27:29.076556 | orchestrator | rtt min/avg/max/mdev = 1.970/3.622/6.655/2.147 ms
2026-04-05 04:27:29.076565 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 04:27:29.076573 | orchestrator | + ping -c3 192.168.112.180
2026-04-05 04:27:29.089298 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2026-04-05 04:27:29.089382 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=8.00 ms
2026-04-05 04:27:30.085804 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.76 ms
2026-04-05 04:27:31.086320 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=1.59 ms
2026-04-05 04:27:31.086433 | orchestrator |
2026-04-05 04:27:31.086453 | orchestrator | --- 192.168.112.180 ping statistics ---
2026-04-05 04:27:31.086469 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 04:27:31.086483 | orchestrator | rtt min/avg/max/mdev = 1.588/4.115/8.000/2.788 ms
2026-04-05 04:27:31.086908 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-05 04:27:31.515970 | orchestrator | ok: Runtime: 0:11:21.761379
2026-04-05 04:27:31.570435 |
2026-04-05 04:27:31.570586 | TASK [Run tempest]
2026-04-05 04:27:32.104425 | orchestrator | skipping: Conditional result was False
2026-04-05 04:27:32.123633 |
2026-04-05 04:27:32.123802 | TASK [Check prometheus alert status]
2026-04-05 04:27:32.661317 | orchestrator | skipping: Conditional result was False
2026-04-05 04:27:32.675955 |
2026-04-05 04:27:32.676118 | PLAY [Upgrade testbed]
2026-04-05 04:27:32.687185 |
2026-04-05 04:27:32.687313 | TASK [Print next ceph version]
2026-04-05 04:27:32.767271 | orchestrator | ok
2026-04-05 04:27:32.777459 |
2026-04-05 04:27:32.777603 | TASK [Print next openstack version]
2026-04-05 04:27:32.846017 | orchestrator | ok
2026-04-05 04:27:32.857854 |
2026-04-05 04:27:32.858037 | TASK [Print next manager version]
2026-04-05 04:27:32.927304 | orchestrator | ok
2026-04-05 04:27:32.938093 |
2026-04-05 04:27:32.938222 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 04:27:33.006954 | orchestrator | ok
2026-04-05 04:27:33.018802 |
2026-04-05 04:27:33.019022 | TASK [Set cloud fact (local deployment)]
2026-04-05 04:27:33.064367 | orchestrator | skipping: Conditional result was False
2026-04-05 04:27:33.081378 |
2026-04-05 04:27:33.081540 | TASK [Fetch manager address]
2026-04-05 04:27:33.374202 | orchestrator | ok
2026-04-05 04:27:33.384172 |
2026-04-05 04:27:33.384305 | TASK [Set manager_host address]
2026-04-05 04:27:33.472178 | orchestrator | ok
2026-04-05 04:27:33.482237 |
2026-04-05 04:27:33.482355 | TASK [Run upgrade]
2026-04-05 04:27:34.204286 | orchestrator | + set -e
2026-04-05 04:27:34.204417 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-05 04:27:34.204430 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-05 04:27:34.204436 | orchestrator | + CEPH_VERSION=reef
2026-04-05 04:27:34.204442 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-05 04:27:34.204447 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-05 04:27:34.204453 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-04-05 04:27:34.211996 | orchestrator | + set -e
2026-04-05 04:27:34.212106 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 04:27:34.212130 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 04:27:34.212154 | orchestrator | ++ INTERACTIVE=false
2026-04-05 04:27:34.212164 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 04:27:34.212174 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 04:27:34.213177 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-04-05 04:27:34.250937 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-04-05 04:27:34.251483 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-05 04:27:34.292580 | orchestrator |
2026-04-05 04:27:34.292679 | orchestrator | # UPGRADE MANAGER
2026-04-05 04:27:34.292693 | orchestrator |
2026-04-05 04:27:34.292702 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-04-05 04:27:34.292713 | orchestrator | + echo
2026-04-05 04:27:34.292725 | orchestrator | + echo '# UPGRADE MANAGER'
2026-04-05 04:27:34.292735 | orchestrator | + echo
2026-04-05 04:27:34.292745 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-05 04:27:34.292755 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-05 04:27:34.292765 | orchestrator | + CEPH_VERSION=reef
2026-04-05 04:27:34.292776 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-05 04:27:34.292783 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-05 04:27:34.292790 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-05 04:27:34.299882 | orchestrator | + set -e
2026-04-05 04:27:34.299967 | orchestrator | + VERSION=10.0.0
2026-04-05 04:27:34.299982 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 04:27:34.302453 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-05 04:27:34.302520 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 04:27:34.305026 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-05 04:27:34.307450 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-05 04:27:34.315084 | orchestrator | /opt/configuration ~
2026-04-05 04:27:34.315159 | orchestrator | + set -e
2026-04-05 04:27:34.315169 | orchestrator | + pushd /opt/configuration
2026-04-05 04:27:34.315177 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 04:27:34.315187 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 04:27:34.316331 | orchestrator | ++ deactivate nondestructive
2026-04-05 04:27:34.316347 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:34.316354 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:34.316395 | orchestrator | ++ hash -r
2026-04-05 04:27:34.316403 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:34.316410 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 04:27:34.316417 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 04:27:34.316424 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 04:27:34.316478 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 04:27:34.316487 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 04:27:34.316495 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 04:27:34.316502 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 04:27:34.316511 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:34.316519 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:34.316529 | orchestrator | ++ export PATH
2026-04-05 04:27:34.316537 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:34.316544 | orchestrator | ++ '[' -z '' ']'
2026-04-05 04:27:34.316552 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 04:27:34.316564 | orchestrator | ++ PS1='(venv) '
2026-04-05 04:27:34.316574 | orchestrator | ++ export PS1
2026-04-05 04:27:34.316581 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 04:27:34.316588 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 04:27:34.316595 | orchestrator | ++ hash -r
2026-04-05 04:27:34.316648 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-05 04:27:35.706299 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-05 04:27:35.708145 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-05 04:27:35.709568 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-05 04:27:35.711262 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-05 04:27:35.712866 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-05 04:27:35.737887 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-05 04:27:35.740368 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-05 04:27:35.742432 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-05 04:27:35.744466 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-05 04:27:35.812256 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-05 04:27:35.815293 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-05 04:27:35.818514 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-05 04:27:35.820618 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-05 04:27:35.828013 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-05 04:27:36.229185 | orchestrator | ++ which gilt
2026-04-05 04:27:36.230554 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-05 04:27:36.230609 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-05 04:27:36.551330 | orchestrator | osism.cfg-generics:
2026-04-05 04:27:36.686983 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
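The `set-manager-version.sh` step traced above pins `manager_version` with `sed` and, for non-`latest` releases, deletes the pinned `ceph_version`/`openstack_version` lines so the configuration generics can re-render them. A hedged sketch of that edit as a function (the name `set_manager_version` and the sample file contents are ours, not from the repository):

```shell
# Sketch of the version-pinning edit seen in the trace above.
# $1 = new manager version, $2 = path to a configuration.yml.
set_manager_version() {
    local version="$1" cfg="$2"
    # Rewrite the manager_version line in place.
    sed -i "s/manager_version: .*/manager_version: ${version}/g" "$cfg"
    # For a pinned release, drop ceph/openstack pins so they get re-set.
    if [ "$version" != "latest" ]; then
        sed -i '/ceph_version:/d' "$cfg"
        sed -i '/openstack_version:/d' "$cfg"
    fi
}
```

Note this relies on GNU `sed -i`; BSD `sed` would need `-i ''`.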
2026-04-05 04:27:36.688418 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-05 04:27:36.692205 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-05 04:27:36.692261 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-05 04:27:37.768158 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-05 04:27:37.783313 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-05 04:27:38.223015 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-05 04:27:38.290564 | orchestrator | ~
2026-04-05 04:27:38.290688 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 04:27:38.290714 | orchestrator | + deactivate
2026-04-05 04:27:38.290733 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-05 04:27:38.290750 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:38.290765 | orchestrator | + export PATH
2026-04-05 04:27:38.290781 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-05 04:27:38.290796 | orchestrator | + '[' -n '' ']'
2026-04-05 04:27:38.290857 | orchestrator | + hash -r
2026-04-05 04:27:38.290872 | orchestrator | + '[' -n '' ']'
2026-04-05 04:27:38.290886 | orchestrator | + unset VIRTUAL_ENV
2026-04-05 04:27:38.290899 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-05 04:27:38.290913 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-05 04:27:38.290928 | orchestrator | + unset -f deactivate
2026-04-05 04:27:38.290942 | orchestrator | + popd
2026-04-05 04:27:38.292725 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-05 04:27:38.292909 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-05 04:27:38.298530 | orchestrator | + set -e
2026-04-05 04:27:38.298616 | orchestrator | + NAMESPACE=kolla/release
2026-04-05 04:27:38.298635 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-05 04:27:38.308912 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-05 04:27:38.318565 | orchestrator | /opt/configuration ~
2026-04-05 04:27:38.318693 | orchestrator | + set -e
2026-04-05 04:27:38.318713 | orchestrator | + pushd /opt/configuration
2026-04-05 04:27:38.318722 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 04:27:38.318729 | orchestrator | + source /opt/venv/bin/activate
2026-04-05 04:27:38.318737 | orchestrator | ++ deactivate nondestructive
2026-04-05 04:27:38.318744 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:38.318751 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:38.318759 | orchestrator | ++ hash -r
2026-04-05 04:27:38.318766 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:38.318781 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-05 04:27:38.318800 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-05 04:27:38.318839 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-05 04:27:38.318847 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-05 04:27:38.318854 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-05 04:27:38.318862 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-05 04:27:38.318873 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-05 04:27:38.318882 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:38.318890 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:38.318898 | orchestrator | ++ export PATH
2026-04-05 04:27:38.318956 | orchestrator | ++ '[' -n '' ']'
2026-04-05 04:27:38.319127 | orchestrator | ++ '[' -z '' ']'
2026-04-05 04:27:38.319139 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-05 04:27:38.319146 | orchestrator | ++ PS1='(venv) '
2026-04-05 04:27:38.319154 | orchestrator | ++ export PS1
2026-04-05 04:27:38.319161 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-05 04:27:38.319168 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-05 04:27:38.319175 | orchestrator | ++ hash -r
2026-04-05 04:27:38.319183 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-05 04:27:38.901238 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-05 04:27:38.902500 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-05 04:27:38.903991 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-05 04:27:38.905541 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-05 04:27:38.906742 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-05 04:27:38.918389 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-05 04:27:38.920141 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-05 04:27:38.921463 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-05 04:27:38.923020 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-05 04:27:38.962202 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-05 04:27:38.963866 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-05 04:27:38.966002 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-05 04:27:38.967348 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-05 04:27:38.971471 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-05 04:27:39.252291 | orchestrator | ++ which gilt
2026-04-05 04:27:39.255744 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-05 04:27:39.255799 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-05 04:27:39.447896 | orchestrator | osism.cfg-generics:
2026-04-05 04:27:39.548666 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-05 04:27:39.548837 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-05 04:27:39.549058 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-05 04:27:39.549085 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-05 04:27:40.248425 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-05 04:27:40.264161 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-05 04:27:40.672705 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-05 04:27:40.732688 | orchestrator | ~
2026-04-05 04:27:40.732767 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-05 04:27:40.732781 | orchestrator | + deactivate
2026-04-05 04:27:40.732794 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-05 04:27:40.732867 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-05 04:27:40.732880 | orchestrator | + export PATH
2026-04-05 04:27:40.732891 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-05 04:27:40.732902 | orchestrator | + '[' -n '' ']'
2026-04-05 04:27:40.732913 | orchestrator | + hash -r
2026-04-05 04:27:40.732923 | orchestrator | + '[' -n '' ']'
2026-04-05 04:27:40.732935 | orchestrator | + unset VIRTUAL_ENV
2026-04-05 04:27:40.732946 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-05 04:27:40.732957 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-05 04:27:40.732968 | orchestrator | + unset -f deactivate
2026-04-05 04:27:40.732979 | orchestrator | + popd
2026-04-05 04:27:40.735061 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-05 04:27:40.795986 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 04:27:40.796729 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-05 04:27:40.878921 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-05 04:27:40.879023 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 04:27:40.885045 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-05 04:27:40.890936 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-04-05 04:27:40.954270 | orchestrator | ++ '[' -1 -le 0 ']'
2026-04-05 04:27:40.955434 | orchestrator | +++ semver 10.0.0 10.0.0-0
2026-04-05 04:27:41.050875 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-04-05 04:27:41.050970 | orchestrator | ++ echo true
2026-04-05 04:27:41.051057 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-05 04:27:41.052913 | orchestrator | +++ semver 2024.2 2024.2
2026-04-05 04:27:41.118632 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-05 04:27:41.119162 | orchestrator | +++ semver 2024.2 2025.1
2026-04-05 04:27:41.165577 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-05 04:27:41.165669 | orchestrator | ++ echo false
2026-04-05 04:27:41.165737 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-05 04:27:41.165912 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-05 04:27:41.165926 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-05 04:27:41.166195 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-05 04:27:41.166283 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-05 04:27:41.170964 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-05 04:27:41.171029 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-05 04:27:41.185236 | orchestrator | export RABBITMQ3TO4=true
2026-04-05 04:27:41.188175 | orchestrator | + osism update manager
2026-04-05 04:27:47.812083 | orchestrator | Collecting uv
2026-04-05 04:27:47.934393 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-05 04:27:47.957460 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB)
2026-04-05 04:27:48.856361 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 34.6 MB/s eta 0:00:00
2026-04-05 04:27:48.928939 | orchestrator | Installing collected packages: uv
2026-04-05 04:27:49.451711 | orchestrator | Successfully installed uv-0.11.3
2026-04-05 04:27:50.316269 | orchestrator | Resolved 11 packages in 432ms
2026-04-05 04:27:50.350236 | orchestrator | Downloading ansible (54.5MiB)
2026-04-05 04:27:50.372080 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-05 04:27:50.372168 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-05 04:27:50.372514 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-05 04:27:50.750004 | orchestrator | Downloaded netaddr
2026-04-05 04:27:50.860984 | orchestrator | Downloaded cryptography
2026-04-05 04:27:50.971275 | orchestrator | Downloaded ansible-core
2026-04-05 04:27:59.464340 | orchestrator | Downloaded ansible
2026-04-05 04:27:59.465047 | orchestrator | Prepared 11 packages in 9.14s
2026-04-05 04:28:00.054965 | orchestrator | Installed 11 packages in 588ms
2026-04-05 04:28:00.055052 | orchestrator | + ansible==11.11.0
2026-04-05 04:28:00.055064 | orchestrator | + ansible-core==2.18.15
2026-04-05 04:28:00.055074 | orchestrator | + cffi==2.0.0
2026-04-05 04:28:00.055083 | orchestrator | + cryptography==46.0.6
2026-04-05 04:28:00.055093 | orchestrator | + jinja2==3.1.6
2026-04-05 04:28:00.055102 | orchestrator | + markupsafe==3.0.3
2026-04-05 04:28:00.055111 | orchestrator | + netaddr==1.3.0
2026-04-05 04:28:00.055119 | orchestrator | + packaging==26.0
2026-04-05 04:28:00.055128 | orchestrator | + pycparser==3.0
2026-04-05 04:28:00.055137 | orchestrator | + pyyaml==6.0.3
2026-04-05 04:28:00.055148 | orchestrator | + resolvelib==1.0.1
2026-04-05 04:28:01.436187 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-20724574y07fsd/tmp_bbz4ac1/ansible-collection-servicesal6ic7u3'...
2026-04-05 04:28:03.048320 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-05 04:28:03.048394 | orchestrator | Already on 'main'
2026-04-05 04:28:03.574356 | orchestrator | Starting galaxy collection install process
2026-04-05 04:28:03.574453 | orchestrator | Process install dependency map
2026-04-05 04:28:03.574472 | orchestrator | Starting collection install process
2026-04-05 04:28:03.574488 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-05 04:28:03.574501 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-05 04:28:03.574510 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-05 04:28:04.124689 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-2072632x81g0f6/tmpsfrey4pq/ansible-playbooks-managerbaw_c7x_'...
2026-04-05 04:28:04.964476 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-05 04:28:04.964578 | orchestrator | Already on 'main'
2026-04-05 04:28:05.313478 | orchestrator | Starting galaxy collection install process
2026-04-05 04:28:05.313585 | orchestrator | Process install dependency map
2026-04-05 04:28:05.313606 | orchestrator | Starting collection install process
2026-04-05 04:28:05.313624 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-05 04:28:05.313640 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-05 04:28:05.313649 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-05 04:28:06.066961 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-05 04:28:06.067057 | orchestrator | -vvvv to see details
2026-04-05 04:28:06.547154 | orchestrator |
2026-04-05 04:28:06.547248 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-05 04:28:06.547260 | orchestrator |
2026-04-05 04:28:06.547290 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-05 04:28:10.842346 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:10.842444 | orchestrator |
2026-04-05 04:28:10.842456 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-05 04:28:10.911913 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 04:28:10.912010 | orchestrator |
2026-04-05 04:28:10.912026 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-05 04:28:12.976158 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:12.976258 | orchestrator |
2026-04-05 04:28:12.976269 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-05 04:28:13.059141 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:13.059229 | orchestrator |
2026-04-05 04:28:13.059242 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-05 04:28:13.138735 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-05 04:28:13.138895 | orchestrator |
2026-04-05 04:28:13.138912 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-05 04:28:17.697713 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-05 04:28:17.697837 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-05 04:28:17.697847 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-05 04:28:17.697860 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-05 04:28:17.697864 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-05 04:28:17.697869 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-05 04:28:17.697873 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-05 04:28:17.697877 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-05 04:28:17.697881 | orchestrator |
2026-04-05 04:28:17.697886 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-05 04:28:18.880227 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:18.880315 | orchestrator |
2026-04-05 04:28:18.880327 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-05 04:28:19.899954 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:19.900063 | orchestrator |
2026-04-05 04:28:19.900082 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-05 04:28:20.033257 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-05 04:28:20.033359 | orchestrator |
2026-04-05 04:28:20.033375 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-05 04:28:22.085444 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-05 04:28:22.085574 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-05 04:28:22.085601 | orchestrator |
2026-04-05 04:28:22.085622 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-05 04:28:23.146349 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:23.146449 | orchestrator |
2026-04-05 04:28:23.146464 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-05 04:28:23.212757 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:28:23.212912 | orchestrator |
2026-04-05 04:28:23.212929 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-05 04:28:23.314982 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-05 04:28:23.315079 | orchestrator |
2026-04-05 04:28:23.315094 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-05 04:28:24.403332 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:24.403498 | orchestrator |
2026-04-05 04:28:24.403527 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-05 04:28:24.484897 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-05 04:28:24.485014 | orchestrator |
2026-04-05 04:28:24.485039 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-05 04:28:26.584405 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-05 04:28:26.584532 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-05 04:28:26.584555 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:26.584574 | orchestrator |
2026-04-05 04:28:26.584594 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-05 04:28:27.649286 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:27.649377 | orchestrator |
2026-04-05 04:28:27.649389 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-05 04:28:27.722223 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:28:27.722310 | orchestrator |
2026-04-05 04:28:27.722320 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-05 04:28:27.824242 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-05 04:28:27.824314 | orchestrator |
2026-04-05 04:28:27.824320 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-05 04:28:28.578361 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:28.578498 | orchestrator |
2026-04-05 04:28:28.578517 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-05 04:28:29.212405 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:29.212481 | orchestrator |
2026-04-05 04:28:29.212503 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-05 04:28:31.267599 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-05 04:28:31.267726 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-05 04:28:31.267746 | orchestrator |
2026-04-05 04:28:31.267762 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-05 04:28:32.593350 | orchestrator | changed: [testbed-manager]
2026-04-05 04:28:32.593451 | orchestrator |
2026-04-05 04:28:32.593466 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-05 04:28:33.197641 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:33.197719 | orchestrator |
2026-04-05 04:28:33.197726 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-05 04:28:33.788217 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:33.788306 | orchestrator |
2026-04-05 04:28:33.788316 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-05 04:28:33.855410 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:28:33.855509 | orchestrator |
2026-04-05 04:28:33.855524 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-05 04:28:33.948988 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-05 04:28:33.949112 | orchestrator |
2026-04-05 04:28:33.949139 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-05 04:28:34.016058 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:34.016180 | orchestrator |
2026-04-05 04:28:34.016195 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-05 04:28:37.241653 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-05 04:28:37.241775 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-05 04:28:37.241795 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-05 04:28:37.241809 | orchestrator |
2026-04-05 04:28:37.241822 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-05 04:28:38.341075 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:38.341201 | orchestrator |
2026-04-05 04:28:38.341219 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-05 04:28:39.400168 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:39.400309 | orchestrator |
2026-04-05 04:28:39.400336 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-05 04:28:40.503112 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:40.503182 | orchestrator |
2026-04-05 04:28:40.503190 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-05 04:28:40.575527 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-05 04:28:40.575603 | orchestrator |
2026-04-05 04:28:40.575611 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-05 04:28:40.637878 | orchestrator | ok: [testbed-manager]
2026-04-05 04:28:40.638000 | orchestrator |
2026-04-05 04:28:40.638072 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-05 04:28:41.699419 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-04-05 04:28:41.699505 | orchestrator |
2026-04-05 04:28:41.699515 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-05 04:28:41.796425 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-05 04:28:41.796512 | orchestrator |
2026-04-05 04:28:41.796524 | orchestrator | TASK
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-05 04:28:42.866445 | orchestrator | ok: [testbed-manager] 2026-04-05 04:28:42.866548 | orchestrator | 2026-04-05 04:28:42.866564 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-05 04:28:44.044616 | orchestrator | ok: [testbed-manager] 2026-04-05 04:28:44.044753 | orchestrator | 2026-04-05 04:28:44.044766 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-05 04:28:44.126855 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:28:44.126950 | orchestrator | 2026-04-05 04:28:44.126975 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-05 04:28:44.194207 | orchestrator | ok: [testbed-manager] 2026-04-05 04:28:44.194292 | orchestrator | 2026-04-05 04:28:44.194301 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-05 04:28:45.607961 | orchestrator | changed: [testbed-manager] 2026-04-05 04:28:45.608060 | orchestrator | 2026-04-05 04:28:45.608073 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-05 04:30:01.300578 | orchestrator | changed: [testbed-manager] 2026-04-05 04:30:01.300716 | orchestrator | 2026-04-05 04:30:01.300729 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-05 04:30:02.531034 | orchestrator | ok: [testbed-manager] 2026-04-05 04:30:02.531139 | orchestrator | 2026-04-05 04:30:02.531148 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-05 04:30:02.584905 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:30:02.585036 | orchestrator | 2026-04-05 04:30:02.585047 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-05 
04:30:03.492602 | orchestrator | ok: [testbed-manager] 2026-04-05 04:30:03.492694 | orchestrator | 2026-04-05 04:30:03.492706 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-05 04:30:03.572612 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:30:03.572691 | orchestrator | 2026-04-05 04:30:03.572699 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-05 04:30:03.572706 | orchestrator | 2026-04-05 04:30:03.572712 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-05 04:30:18.745776 | orchestrator | changed: [testbed-manager] 2026-04-05 04:30:18.745855 | orchestrator | 2026-04-05 04:30:18.745863 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-05 04:31:18.808712 | orchestrator | Pausing for 60 seconds 2026-04-05 04:31:18.808823 | orchestrator | changed: [testbed-manager] 2026-04-05 04:31:18.808830 | orchestrator | 2026-04-05 04:31:18.808842 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-04-05 04:31:18.875559 | orchestrator | ok: [testbed-manager] 2026-04-05 04:31:18.875673 | orchestrator | 2026-04-05 04:31:18.875693 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-05 04:31:23.089873 | orchestrator | changed: [testbed-manager] 2026-04-05 04:31:23.089982 | orchestrator | 2026-04-05 04:31:23.089999 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-05 04:32:26.094734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-05 04:32:26.094872 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-05 04:32:26.094888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-05 04:32:26.094900 | orchestrator | changed: [testbed-manager] 2026-04-05 04:32:26.094912 | orchestrator | 2026-04-05 04:32:26.094923 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-05 04:32:33.239783 | orchestrator | changed: [testbed-manager] 2026-04-05 04:32:33.239918 | orchestrator | 2026-04-05 04:32:33.239939 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-05 04:32:33.329966 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-05 04:32:33.330237 | orchestrator | 2026-04-05 04:32:33.330255 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-05 04:32:33.330268 | orchestrator | 2026-04-05 04:32:33.330279 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-05 04:32:33.400651 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:32:33.400773 | orchestrator | 2026-04-05 04:32:33.400792 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-05 04:32:33.469221 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-05 04:32:33.469323 | orchestrator | 2026-04-05 04:32:33.469337 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-05 04:32:34.633681 | orchestrator | changed: [testbed-manager] 2026-04-05 04:32:34.633794 | orchestrator | 2026-04-05 04:32:34.633815 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-05 04:32:38.754346 
| orchestrator | ok: [testbed-manager] 2026-04-05 04:32:38.754455 | orchestrator | 2026-04-05 04:32:38.754472 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-05 04:32:38.840840 | orchestrator | ok: [testbed-manager] => { 2026-04-05 04:32:38.840926 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-05 04:32:38.840937 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-05 04:32:38.840945 | orchestrator | "Checking running containers against expected versions...", 2026-04-05 04:32:38.840983 | orchestrator | "", 2026-04-05 04:32:38.840992 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-05 04:32:38.841000 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-05 04:32:38.841008 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841016 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-05 04:32:38.841023 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841031 | orchestrator | "", 2026-04-05 04:32:38.841039 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-05 04:32:38.841046 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-05 04:32:38.841054 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841061 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-05 04:32:38.841068 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841075 | orchestrator | "", 2026-04-05 04:32:38.841083 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-05 04:32:38.841090 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-05 04:32:38.841097 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841104 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-05 04:32:38.841112 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841119 | orchestrator | "", 2026-04-05 04:32:38.841126 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-05 04:32:38.841133 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-05 04:32:38.841141 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841148 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-05 04:32:38.841155 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841162 | orchestrator | "", 2026-04-05 04:32:38.841169 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-05 04:32:38.841176 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-05 04:32:38.841183 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841191 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-05 04:32:38.841198 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841205 | orchestrator | "", 2026-04-05 04:32:38.841212 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-05 04:32:38.841219 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841255 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841263 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841270 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841277 | orchestrator | "", 2026-04-05 04:32:38.841285 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-05 04:32:38.841292 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-05 04:32:38.841299 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841306 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-05 
04:32:38.841313 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841320 | orchestrator | "", 2026-04-05 04:32:38.841327 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-05 04:32:38.841334 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-05 04:32:38.841342 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841349 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-05 04:32:38.841356 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841363 | orchestrator | "", 2026-04-05 04:32:38.841370 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-05 04:32:38.841377 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-05 04:32:38.841384 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841391 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-05 04:32:38.841398 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841405 | orchestrator | "", 2026-04-05 04:32:38.841416 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-05 04:32:38.841423 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-05 04:32:38.841431 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841438 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-05 04:32:38.841445 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841453 | orchestrator | "", 2026-04-05 04:32:38.841460 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-05 04:32:38.841467 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841474 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841481 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841488 | orchestrator | " Status: ✅ MATCH", 2026-04-05 
04:32:38.841495 | orchestrator | "", 2026-04-05 04:32:38.841503 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-05 04:32:38.841510 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841517 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841524 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841532 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841539 | orchestrator | "", 2026-04-05 04:32:38.841546 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-05 04:32:38.841553 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841560 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841567 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841575 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841582 | orchestrator | "", 2026-04-05 04:32:38.841589 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-05 04:32:38.841596 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841603 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841610 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841632 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841639 | orchestrator | "", 2026-04-05 04:32:38.841647 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-05 04:32:38.841654 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841661 | orchestrator | " Enabled: true", 2026-04-05 04:32:38.841674 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-05 04:32:38.841681 | orchestrator | " Status: ✅ MATCH", 2026-04-05 04:32:38.841688 | orchestrator | "", 2026-04-05 04:32:38.841695 | orchestrator | "=== Summary 
===", 2026-04-05 04:32:38.841702 | orchestrator | "Errors (version mismatches): 0", 2026-04-05 04:32:38.841710 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-05 04:32:38.841717 | orchestrator | "", 2026-04-05 04:32:38.841724 | orchestrator | "✅ All running containers match expected versions!" 2026-04-05 04:32:38.841731 | orchestrator | ] 2026-04-05 04:32:38.841738 | orchestrator | } 2026-04-05 04:32:38.841746 | orchestrator | 2026-04-05 04:32:38.841753 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-05 04:32:38.895871 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:32:38.896001 | orchestrator | 2026-04-05 04:32:38.896018 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:32:38.896030 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-04-05 04:32:38.896041 | orchestrator | 2026-04-05 04:32:52.136191 | orchestrator | 2026-04-05 04:32:52 | INFO  | Task e5502d7f-b830-4c4d-8bdd-26e63c04b209 (sync inventory) is running in background. Output coming soon. 
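The version check above walks each managed service, compares the expected image reference against the image of the running container, and tallies mismatches. A minimal sketch of that per-service comparison is below; the function name `check_service_version` and the use of `docker inspect -f '{{.Config.Image}}'` are assumptions for illustration, not the actual script deployed by the `verify-versions.yml` tasks.

```shell
# Hypothetical per-service check: compare an expected image reference
# against the image of the running container (sketch only; the real
# script shipped by osism.services.manager may differ in detail).
check_service_version() {
    local name="$1" expected="$2"
    local running
    # Ask the container runtime which image the container was started from.
    running="$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null)" || {
        echo "  Status: WARNING - expected container not running"
        return 2
    }
    echo "  Expected: $expected"
    echo "  Running: $running"
    if [[ "$running" == "$expected" ]]; then
        echo "  Status: ✅ MATCH"
    else
        echo "  Status: ❌ MISMATCH"
        return 1
    fi
}
```

In the run above all fourteen services matched, so the summary reported zero errors and zero warnings.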
2026-04-05 04:33:26.964598 | orchestrator | 2026-04-05 04:32:53 | INFO  | Starting group_vars file reorganization 2026-04-05 04:33:26.964717 | orchestrator | 2026-04-05 04:32:53 | INFO  | Moved 0 file(s) to their respective directories 2026-04-05 04:33:26.964733 | orchestrator | 2026-04-05 04:32:53 | INFO  | Group_vars file reorganization completed 2026-04-05 04:33:26.964746 | orchestrator | 2026-04-05 04:32:56 | INFO  | Starting variable preparation from inventory 2026-04-05 04:33:26.964758 | orchestrator | 2026-04-05 04:33:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-04-05 04:33:26.964769 | orchestrator | 2026-04-05 04:33:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-04-05 04:33:26.964780 | orchestrator | 2026-04-05 04:33:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-04-05 04:33:26.964791 | orchestrator | 2026-04-05 04:33:00 | INFO  | 3 file(s) written, 6 host(s) processed 2026-04-05 04:33:26.964802 | orchestrator | 2026-04-05 04:33:00 | INFO  | Variable preparation completed 2026-04-05 04:33:26.964813 | orchestrator | 2026-04-05 04:33:01 | INFO  | Starting inventory overwrite handling 2026-04-05 04:33:26.964824 | orchestrator | 2026-04-05 04:33:01 | INFO  | Handling group overwrites in 99-overwrite 2026-04-05 04:33:26.964874 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removing group frr:children from 60-generic 2026-04-05 04:33:26.964886 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removing group netbird:children from 50-infrastructure 2026-04-05 04:33:26.964897 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removing group ceph-mds from 50-ceph 2026-04-05 04:33:26.964908 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removing group ceph-rgw from 50-ceph 2026-04-05 04:33:26.964919 | orchestrator | 2026-04-05 04:33:01 | INFO  | Handling group overwrites in 20-roles 2026-04-05 04:33:26.964930 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-04-05 04:33:26.964941 | orchestrator | 2026-04-05 04:33:01 | INFO  | Removed 5 group(s) in total 2026-04-05 04:33:26.964952 | orchestrator | 2026-04-05 04:33:01 | INFO  | Inventory overwrite handling completed 2026-04-05 04:33:26.964963 | orchestrator | 2026-04-05 04:33:03 | INFO  | Starting merge of inventory files 2026-04-05 04:33:26.964974 | orchestrator | 2026-04-05 04:33:03 | INFO  | Inventory files merged successfully 2026-04-05 04:33:26.964985 | orchestrator | 2026-04-05 04:33:08 | INFO  | Generating minified hosts file 2026-04-05 04:33:26.965052 | orchestrator | 2026-04-05 04:33:10 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml 2026-04-05 04:33:26.965077 | orchestrator | 2026-04-05 04:33:10 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json 2026-04-05 04:33:26.965089 | orchestrator | 2026-04-05 04:33:12 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-04-05 04:33:26.965100 | orchestrator | 2026-04-05 04:33:25 | INFO  | Successfully wrote ClusterShell configuration 2026-04-05 04:33:27.197997 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-05 04:33:27.198175 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-05 04:33:27.198194 | orchestrator | + local max_attempts=60 2026-04-05 04:33:27.198207 | orchestrator | + local name=kolla-ansible 2026-04-05 04:33:27.198221 | orchestrator | + local attempt_num=1 2026-04-05 04:33:27.198504 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-05 04:33:27.238336 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 04:33:27.238432 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-05 04:33:27.238452 | orchestrator | + local max_attempts=60 2026-04-05 04:33:27.238472 | orchestrator | + local name=osism-ansible 2026-04-05 04:33:27.238493 | orchestrator | + local attempt_num=1 2026-04-05 
04:33:27.238920 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-05 04:33:27.278663 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-05 04:33:27.278760 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-04-05 04:33:27.492770 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-05 04:33:27.492961 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-05 04:33:27.492980 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-05 04:33:27.492990 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-05 04:33:27.493018 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 3 hours ago Up 2 minutes (healthy) 8000/tcp 2026-04-05 04:33:27.493027 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-04-05 04:33:27.493036 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-04-05 04:33:27.493044 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-04-05 04:33:27.493053 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 37 seconds ago 2026-04-05 04:33:27.493061 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 
mariadb 3 hours ago Up 3 minutes (healthy) 3306/tcp 2026-04-05 04:33:27.493070 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-04-05 04:33:27.493105 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 3 hours ago Up 3 minutes (healthy) 6379/tcp 2026-04-05 04:33:27.493120 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-04-05 04:33:27.493134 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-04-05 04:33:27.493149 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-04-05 04:33:27.493257 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-04-05 04:33:27.498285 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-04-05 04:33:27.498364 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-04-05 04:33:27.498377 | orchestrator | + osism apply facts 2026-04-05 04:33:39.013242 | orchestrator | 2026-04-05 04:33:39 | INFO  | Prepare task for execution of facts. 2026-04-05 04:33:39.107268 | orchestrator | 2026-04-05 04:33:39 | INFO  | Task 784895f1-1941-4e84-a09a-4ad2226b9d4d (facts) was prepared for execution. 2026-04-05 04:33:39.107340 | orchestrator | 2026-04-05 04:33:39 | INFO  | It takes a moment until task 784895f1-1941-4e84-a09a-4ad2226b9d4d (facts) has been started and output is visible here. 
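The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: it polls the container's health status via `docker inspect` until it reports `healthy`, bounded by a maximum number of attempts. A reconstruction under those assumptions (the retry/sleep behaviour is inferred from the traced variable names, not taken from the actual script in the testbed repository):

```shell
# Reconstruction of the traced helper: poll a container's healthcheck
# status until it is "healthy" or max_attempts polls have been made.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name failed to become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 1  # assumed poll interval; the trace does not show it
    done
}
```

In this run both `kolla-ansible` and `osism-ansible` were already healthy on the first poll, so the helper returned immediately.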
2026-04-05 04:33:59.794847 | orchestrator | 2026-04-05 04:33:59.794962 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 04:33:59.794980 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 04:33:59.794992 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 04:33:59.795004 | orchestrator | 2026-04-05 04:33:59.795010 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 04:33:59.795016 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 04:33:59.795022 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 04:33:59.795033 | orchestrator | Sunday 05 April 2026 04:33:45 +0000 (0:00:02.090) 0:00:02.090 ********** 2026-04-05 04:33:59.795039 | orchestrator | ok: [testbed-manager] 2026-04-05 04:33:59.795046 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:33:59.795051 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:33:59.795057 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:33:59.795062 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:33:59.795068 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:33:59.795073 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:33:59.795078 | orchestrator | 2026-04-05 04:33:59.795084 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 04:33:59.795090 | orchestrator | Sunday 05 April 2026 04:33:47 +0000 (0:00:02.423) 0:00:04.513 ********** 2026-04-05 04:33:59.795095 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:33:59.795101 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:33:59.795106 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:33:59.795112 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:33:59.795117 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
04:33:59.795123 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:33:59.795129 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:33:59.795134 | orchestrator | 2026-04-05 04:33:59.795139 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 04:33:59.795169 | orchestrator | 2026-04-05 04:33:59.795175 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-05 04:33:59.795180 | orchestrator | Sunday 05 April 2026 04:33:49 +0000 (0:00:02.467) 0:00:06.980 ********** 2026-04-05 04:33:59.795186 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:33:59.795191 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:33:59.795196 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:33:59.795202 | orchestrator | ok: [testbed-manager] 2026-04-05 04:33:59.795207 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:33:59.795215 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:33:59.795224 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:33:59.795232 | orchestrator | 2026-04-05 04:33:59.795241 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 04:33:59.795249 | orchestrator | 2026-04-05 04:33:59.795258 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 04:33:59.795267 | orchestrator | Sunday 05 April 2026 04:33:57 +0000 (0:00:07.555) 0:00:14.536 ********** 2026-04-05 04:33:59.795276 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:33:59.795285 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:33:59.795294 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:33:59.795303 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:33:59.795312 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:33:59.795321 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:33:59.795327 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 04:33:59.795334 | orchestrator | 2026-04-05 04:33:59.795343 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:33:59.795356 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795371 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795379 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795388 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795397 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795406 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795415 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 04:33:59.795423 | orchestrator | 2026-04-05 04:33:59.795431 | orchestrator | 2026-04-05 04:33:59.795440 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:33:59.795447 | orchestrator | Sunday 05 April 2026 04:33:59 +0000 (0:00:01.883) 0:00:16.419 ********** 2026-04-05 04:33:59.795455 | orchestrator | =============================================================================== 2026-04-05 04:33:59.795481 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.55s 2026-04-05 04:33:59.795490 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.47s 2026-04-05 04:33:59.795498 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.42s 2026-04-05 04:33:59.795531 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.88s 2026-04-05 04:34:00.027290 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-05 04:34:00.097729 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 04:34:00.097912 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-05 04:34:00.133004 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-04-05 04:34:00.133134 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-04-05 04:34:00.137704 | orchestrator | + set -e 2026-04-05 04:34:00.137831 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-04-05 04:34:00.137858 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-05 04:34:00.145353 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-04-05 04:34:00.151600 | orchestrator | 2026-04-05 04:34:00.151679 | orchestrator | # UPGRADE SERVICES 2026-04-05 04:34:00.151704 | orchestrator | 2026-04-05 04:34:00.151712 | orchestrator | + set -e 2026-04-05 04:34:00.151730 | orchestrator | + echo 2026-04-05 04:34:00.151738 | orchestrator | + echo '# UPGRADE SERVICES' 2026-04-05 04:34:00.151746 | orchestrator | + echo 2026-04-05 04:34:00.151776 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 04:34:00.152403 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 04:34:00.152430 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 04:34:00.152440 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 04:34:00.152449 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 04:34:00.152458 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 04:34:00.152469 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 04:34:00.152479 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:34:00.152494 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:34:00.152509 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-04-05 04:34:00.152524 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 04:34:00.152538 | orchestrator | ++ export ARA=false 2026-04-05 04:34:00.152553 | orchestrator | ++ ARA=false 2026-04-05 04:34:00.152569 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 04:34:00.152582 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 04:34:00.152598 | orchestrator | ++ export TEMPEST=false 2026-04-05 04:34:00.152612 | orchestrator | ++ TEMPEST=false 2026-04-05 04:34:00.152626 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 04:34:00.152642 | orchestrator | ++ IS_ZUUL=true 2026-04-05 04:34:00.152656 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:34:00.152671 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:34:00.152686 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 04:34:00.152700 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 04:34:00.152715 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 04:34:00.152730 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 04:34:00.152745 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 04:34:00.152833 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 04:34:00.152848 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 04:34:00.152863 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 04:34:00.152877 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-05 04:34:00.152890 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-05 04:34:00.152906 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-04-05 04:34:00.152920 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-04-05 04:34:00.152936 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-05 04:34:00.160585 | orchestrator | + set -e 2026-04-05 04:34:00.160664 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:34:00.161398 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:34:00.161425 | 
orchestrator | ++ INTERACTIVE=false 2026-04-05 04:34:00.161436 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:34:00.161447 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:34:00.161458 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 04:34:00.161468 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 04:34:00.161479 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 04:34:00.161490 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 04:34:00.161501 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 04:34:00.161512 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 04:34:00.161523 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 04:34:00.161534 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 04:34:00.161544 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 04:34:00.161556 | orchestrator | 2026-04-05 04:34:00.161567 | orchestrator | # PULL IMAGES 2026-04-05 04:34:00.161577 | orchestrator | 2026-04-05 04:34:00.161588 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 04:34:00.161599 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 04:34:00.161610 | orchestrator | ++ export ARA=false 2026-04-05 04:34:00.161623 | orchestrator | ++ ARA=false 2026-04-05 04:34:00.161634 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 04:34:00.161644 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 04:34:00.161655 | orchestrator | ++ export TEMPEST=false 2026-04-05 04:34:00.161665 | orchestrator | ++ TEMPEST=false 2026-04-05 04:34:00.161676 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 04:34:00.161687 | orchestrator | ++ IS_ZUUL=true 2026-04-05 04:34:00.161727 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:34:00.161739 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 04:34:00.161749 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 04:34:00.161810 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 04:34:00.161820 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 04:34:00.161831 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 04:34:00.161842 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 04:34:00.161853 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 04:34:00.161864 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 04:34:00.161875 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 04:34:00.161885 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-05 04:34:00.161896 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-05 04:34:00.161907 | orchestrator | + echo 2026-04-05 04:34:00.161918 | orchestrator | + echo '# PULL IMAGES' 2026-04-05 04:34:00.161929 | orchestrator | + echo 2026-04-05 04:34:00.162255 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-05 04:34:00.218268 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 04:34:00.218360 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-05 04:34:01.594551 | orchestrator | 2026-04-05 04:34:01 | INFO  | Trying to run play pull-images in environment custom 2026-04-05 04:34:11.776393 | orchestrator | 2026-04-05 04:34:11 | INFO  | Prepare task for execution of pull-images. 2026-04-05 04:34:11.871189 | orchestrator | 2026-04-05 04:34:11 | INFO  | Task 3377ddfa-400e-4901-8515-e6d033fa7e74 (pull-images) was prepared for execution. 2026-04-05 04:34:11.871292 | orchestrator | 2026-04-05 04:34:11 | INFO  | Task 3377ddfa-400e-4901-8515-e6d033fa7e74 is running in background. No more output. Check ARA for logs. 
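The `set-kolla-namespace.sh` trace above boils down to a single `sed` substitution on the kolla group vars. A minimal sketch of that step, using an illustrative temp file instead of the real `/opt/configuration/inventory/group_vars/all/kolla.yml`:

```shell
# Sketch of the namespace rewrite traced above (set-kolla-namespace.sh).
# /tmp/kolla.yml stands in for the real group_vars file.
cat > /tmp/kolla.yml <<'EOF'
docker_namespace: kolla/release/2024.2
EOF

NAMESPACE=kolla/release/2025.1
# '#' as the s-command delimiter avoids escaping the slashes in the path
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" /tmp/kolla.yml
cat /tmp/kolla.yml
# prints: docker_namespace: kolla/release/2025.1
```

The `g` flag is harmless here since the key occurs once per line; the `#` delimiter is what lets the replacement contain `kolla/release/2025.1` unescaped.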
2026-04-05 04:34:12.126806 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-04-05 04:34:12.138296 | orchestrator | + set -e 2026-04-05 04:34:12.138381 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:34:12.138394 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:34:12.138404 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:34:12.138413 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:34:12.138421 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:34:12.138429 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 04:34:12.140890 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 04:34:12.146254 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 04:34:12.146315 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 04:34:12.147279 | orchestrator | ++ semver 10.0.0 8.0.3 2026-04-05 04:34:12.194818 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 04:34:12.194893 | orchestrator | + osism apply frr 2026-04-05 04:34:23.701041 | orchestrator | 2026-04-05 04:34:23 | INFO  | Prepare task for execution of frr. 2026-04-05 04:34:23.792306 | orchestrator | 2026-04-05 04:34:23 | INFO  | Task 155a25bc-2036-4a40-96eb-2e3c74bd4a9e (frr) was prepared for execution. 2026-04-05 04:34:23.792398 | orchestrator | 2026-04-05 04:34:23 | INFO  | It takes a moment until task 155a25bc-2036-4a40-96eb-2e3c74bd4a9e (frr) has been started and output is visible here. 
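The `500-kubernetes.sh` trace above sources `manager-version.sh`, which extracts `manager_version` from the configuration YAML with a one-line `awk`. A self-contained sketch of that extraction, against a stand-in file rather than the real `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Sketch of the manager_version extraction traced above.
# /tmp/configuration.yml stands in for the real configuration file.
cat > /tmp/configuration.yml <<'EOF'
manager_version: 10.0.0
EOF

# -F': ' splits on "colon space", so $2 is the bare version string
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
# prints: 10.0.0
```

Anchoring the pattern with `^` keeps the match from picking up other keys that merely contain the substring `manager_version`.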
2026-04-05 04:35:02.375995 | orchestrator | 2026-04-05 04:35:02.376103 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-05 04:35:02.376114 | orchestrator | 2026-04-05 04:35:02.376120 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-05 04:35:02.376126 | orchestrator | Sunday 05 April 2026 04:34:31 +0000 (0:00:03.891) 0:00:03.891 ********** 2026-04-05 04:35:02.376140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 04:35:02.376148 | orchestrator | 2026-04-05 04:35:02.376153 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-05 04:35:02.376159 | orchestrator | Sunday 05 April 2026 04:34:35 +0000 (0:00:03.622) 0:00:07.513 ********** 2026-04-05 04:35:02.376165 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376171 | orchestrator | 2026-04-05 04:35:02.376176 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-05 04:35:02.376198 | orchestrator | Sunday 05 April 2026 04:34:37 +0000 (0:00:02.597) 0:00:10.110 ********** 2026-04-05 04:35:02.376204 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376209 | orchestrator | 2026-04-05 04:35:02.376214 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-05 04:35:02.376219 | orchestrator | Sunday 05 April 2026 04:34:40 +0000 (0:00:03.016) 0:00:13.127 ********** 2026-04-05 04:35:02.376224 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376229 | orchestrator | 2026-04-05 04:35:02.376237 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-05 04:35:02.376242 | orchestrator | Sunday 05 April 2026 04:34:42 +0000 (0:00:01.963) 0:00:15.091 ********** 2026-04-05 
04:35:02.376247 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376252 | orchestrator | 2026-04-05 04:35:02.376257 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-05 04:35:02.376262 | orchestrator | Sunday 05 April 2026 04:34:44 +0000 (0:00:01.917) 0:00:17.008 ********** 2026-04-05 04:35:02.376267 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376272 | orchestrator | 2026-04-05 04:35:02.376277 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-05 04:35:02.376282 | orchestrator | Sunday 05 April 2026 04:34:47 +0000 (0:00:02.645) 0:00:19.654 ********** 2026-04-05 04:35:02.376288 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:35:02.376294 | orchestrator | 2026-04-05 04:35:02.376299 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-05 04:35:02.376304 | orchestrator | Sunday 05 April 2026 04:34:48 +0000 (0:00:01.195) 0:00:20.849 ********** 2026-04-05 04:35:02.376309 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:35:02.376314 | orchestrator | 2026-04-05 04:35:02.376319 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-05 04:35:02.376324 | orchestrator | Sunday 05 April 2026 04:34:49 +0000 (0:00:01.156) 0:00:22.006 ********** 2026-04-05 04:35:02.376329 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:35:02.376335 | orchestrator | 2026-04-05 04:35:02.376342 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-05 04:35:02.376351 | orchestrator | Sunday 05 April 2026 04:34:50 +0000 (0:00:01.234) 0:00:23.240 ********** 2026-04-05 04:35:02.376362 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:35:02.376375 | orchestrator | 2026-04-05 04:35:02.376383 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the 
configuration repository] *** 2026-04-05 04:35:02.376391 | orchestrator | Sunday 05 April 2026 04:34:52 +0000 (0:00:01.196) 0:00:24.437 ********** 2026-04-05 04:35:02.376398 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:35:02.376407 | orchestrator | 2026-04-05 04:35:02.376415 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-05 04:35:02.376423 | orchestrator | Sunday 05 April 2026 04:34:53 +0000 (0:00:01.299) 0:00:25.737 ********** 2026-04-05 04:35:02.376430 | orchestrator | ok: [testbed-manager] 2026-04-05 04:35:02.376438 | orchestrator | 2026-04-05 04:35:02.376446 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-05 04:35:02.376454 | orchestrator | Sunday 05 April 2026 04:34:55 +0000 (0:00:02.334) 0:00:28.071 ********** 2026-04-05 04:35:02.376463 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-05 04:35:02.376472 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-05 04:35:02.376481 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-05 04:35:02.376490 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-05 04:35:02.376498 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-05 04:35:02.376507 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-05 04:35:02.376513 | orchestrator | 2026-04-05 04:35:02.376518 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-04-05 04:35:02.376530 | orchestrator | Sunday 05 April 2026 04:34:59 +0000 (0:00:03.538) 0:00:31.610 ********** 2026-04-05 04:35:02.376536 | orchestrator | ok: 
[testbed-manager] 2026-04-05 04:35:02.376542 | orchestrator | 2026-04-05 04:35:02.376548 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:35:02.376555 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 04:35:02.376562 | orchestrator | 2026-04-05 04:35:02.376567 | orchestrator | 2026-04-05 04:35:02.376574 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:35:02.376580 | orchestrator | Sunday 05 April 2026 04:35:02 +0000 (0:00:02.678) 0:00:34.289 ********** 2026-04-05 04:35:02.376586 | orchestrator | =============================================================================== 2026-04-05 04:35:02.376604 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 3.62s 2026-04-05 04:35:02.376637 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.54s 2026-04-05 04:35:02.376648 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.02s 2026-04-05 04:35:02.376654 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.68s 2026-04-05 04:35:02.376661 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.65s 2026-04-05 04:35:02.376667 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.60s 2026-04-05 04:35:02.376673 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.33s 2026-04-05 04:35:02.376679 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.96s 2026-04-05 04:35:02.376685 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.92s 2026-04-05 04:35:02.376691 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration 
repository --- 1.30s 2026-04-05 04:35:02.376698 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.23s 2026-04-05 04:35:02.376704 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.20s 2026-04-05 04:35:02.376710 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.20s 2026-04-05 04:35:02.376717 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.16s 2026-04-05 04:35:02.604503 | orchestrator | + osism apply kubernetes 2026-04-05 04:35:04.073136 | orchestrator | 2026-04-05 04:35:04 | INFO  | Prepare task for execution of kubernetes. 2026-04-05 04:35:04.141962 | orchestrator | 2026-04-05 04:35:04 | INFO  | Task 6f0c1af1-a14f-4f00-a39d-76e9b3863152 (kubernetes) was prepared for execution. 2026-04-05 04:35:04.142125 | orchestrator | 2026-04-05 04:35:04 | INFO  | It takes a moment until task 6f0c1af1-a14f-4f00-a39d-76e9b3863152 (kubernetes) has been started and output is visible here. 
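The upgrade scripts traced above gate each step on a version comparison (`semver 10.0.0 8.0.3` followed by `[[ 1 -ge 0 ]]`). The `semver` helper itself is not shown in this log; a rough stand-in built on GNU `sort -V` could look like the following. Note this simplification does not honor semver pre-release ordering (e.g. `10.0.0-0` vs `10.0.0`), which the real helper appears to handle:

```shell
# Hypothetical stand-in for the 'semver' gate traced above,
# using GNU coreutils version sort. Assumption: plain X.Y.Z
# versions with no pre-release tags.
version_ge() {
    # true (exit 0) if $1 >= $2 in version ordering
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

if version_ge "10.0.0" "8.0.3"; then
    echo "manager >= 8.0.3, running frr/kubernetes upgrade steps"
fi
```

Each `*.sh` step in the log uses the same pattern: compare the detected `MANAGER_VERSION` against the minimum that introduced the feature, and skip the step on older managers.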
2026-04-05 04:35:48.954972 | orchestrator | 2026-04-05 04:35:48.955092 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-05 04:35:48.955109 | orchestrator | 2026-04-05 04:35:48.955123 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-05 04:35:48.955135 | orchestrator | Sunday 05 April 2026 04:35:10 +0000 (0:00:02.331) 0:00:02.331 ********** 2026-04-05 04:35:48.955146 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:35:48.955158 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.955168 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.955179 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.955190 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.955201 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.955211 | orchestrator | 2026-04-05 04:35:48.955222 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-05 04:35:48.955233 | orchestrator | Sunday 05 April 2026 04:35:14 +0000 (0:00:03.838) 0:00:06.169 ********** 2026-04-05 04:35:48.955244 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.955281 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:35:48.955293 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.955304 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.955314 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.955325 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.955336 | orchestrator | 2026-04-05 04:35:48.955347 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-05 04:35:48.955358 | orchestrator | Sunday 05 April 2026 04:35:16 +0000 (0:00:02.093) 0:00:08.263 ********** 2026-04-05 04:35:48.955368 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.955379 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
04:35:48.955389 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.955400 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.955410 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.955421 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.955432 | orchestrator | 2026-04-05 04:35:48.955442 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-05 04:35:48.955453 | orchestrator | Sunday 05 April 2026 04:35:17 +0000 (0:00:01.828) 0:00:10.091 ********** 2026-04-05 04:35:48.955464 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:35:48.955475 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.955488 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.955501 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.955537 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.955550 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.955563 | orchestrator | 2026-04-05 04:35:48.955576 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-05 04:35:48.955589 | orchestrator | Sunday 05 April 2026 04:35:20 +0000 (0:00:02.680) 0:00:12.771 ********** 2026-04-05 04:35:48.955601 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:35:48.955613 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.955626 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.955638 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.955651 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.955663 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.955676 | orchestrator | 2026-04-05 04:35:48.955688 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-05 04:35:48.955702 | orchestrator | Sunday 05 April 2026 04:35:22 +0000 (0:00:02.249) 0:00:15.021 ********** 2026-04-05 04:35:48.955714 | orchestrator | ok: [testbed-node-3] 2026-04-05 
04:35:48.955726 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.955738 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.955750 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.955763 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.955775 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.955787 | orchestrator | 2026-04-05 04:35:48.955800 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-05 04:35:48.955813 | orchestrator | Sunday 05 April 2026 04:35:25 +0000 (0:00:02.642) 0:00:17.663 ********** 2026-04-05 04:35:48.955825 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.955838 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:35:48.955850 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.955860 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.955871 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.955882 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.955892 | orchestrator | 2026-04-05 04:35:48.955903 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-05 04:35:48.955914 | orchestrator | Sunday 05 April 2026 04:35:27 +0000 (0:00:02.014) 0:00:19.678 ********** 2026-04-05 04:35:48.955925 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.955935 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:35:48.955946 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.955956 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.955967 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.955985 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.955996 | orchestrator | 2026-04-05 04:35:48.956007 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-05 04:35:48.956018 | orchestrator | Sunday 05 April 2026 04:35:30 +0000 
(0:00:03.058) 0:00:22.736 ********** 2026-04-05 04:35:48.956029 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956040 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956051 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.956062 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956073 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956084 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:35:48.956094 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956105 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956116 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.956127 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956137 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956148 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.956177 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956189 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956199 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.956210 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 04:35:48.956221 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 04:35:48.956231 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.956242 | orchestrator | 2026-04-05 04:35:48.956252 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin 
to sudo secure_path] ********************* 2026-04-05 04:35:48.956263 | orchestrator | Sunday 05 April 2026 04:35:32 +0000 (0:00:02.250) 0:00:24.986 ********** 2026-04-05 04:35:48.956273 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:35:48.956284 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:35:48.956295 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:35:48.956305 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:35:48.956316 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:35:48.956326 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:35:48.956337 | orchestrator | 2026-04-05 04:35:48.956347 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-05 04:35:48.956359 | orchestrator | Sunday 05 April 2026 04:35:35 +0000 (0:00:02.435) 0:00:27.422 ********** 2026-04-05 04:35:48.956370 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:35:48.956380 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.956391 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.956402 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.956412 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.956425 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.956445 | orchestrator | 2026-04-05 04:35:48.956465 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-05 04:35:48.956484 | orchestrator | Sunday 05 April 2026 04:35:37 +0000 (0:00:01.927) 0:00:29.349 ********** 2026-04-05 04:35:48.956503 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:35:48.956559 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:35:48.956572 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:35:48.956582 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:35:48.956593 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:35:48.956603 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:35:48.956619 | 
orchestrator |
2026-04-05 04:35:48.956630 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-05 04:35:48.956649 | orchestrator | Sunday 05 April 2026 04:35:40 +0000 (0:00:02.807) 0:00:32.157 **********
2026-04-05 04:35:48.956660 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:35:48.956683 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:35:48.956694 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:35:48.956705 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:35:48.956716 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:35:48.956726 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:35:48.956737 | orchestrator |
2026-04-05 04:35:48.956747 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-05 04:35:48.956758 | orchestrator | Sunday 05 April 2026 04:35:42 +0000 (0:00:02.016) 0:00:34.173 **********
2026-04-05 04:35:48.956769 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:35:48.956779 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:35:48.956790 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:35:48.956800 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:35:48.956811 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:35:48.956821 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:35:48.956832 | orchestrator |
2026-04-05 04:35:48.956843 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-05 04:35:48.956855 | orchestrator | Sunday 05 April 2026 04:35:44 +0000 (0:00:02.194) 0:00:36.367 **********
2026-04-05 04:35:48.956866 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:35:48.956876 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:35:48.956887 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:35:48.956897 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:35:48.956908 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:35:48.956918 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:35:48.956929 | orchestrator |
2026-04-05 04:35:48.956940 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-05 04:35:48.956951 | orchestrator | Sunday 05 April 2026 04:35:46 +0000 (0:00:02.135) 0:00:38.503 **********
2026-04-05 04:35:48.956962 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-05 04:35:48.956973 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-05 04:35:48.956983 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:35:48.956994 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-05 04:35:48.957005 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-05 04:35:48.957015 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:35:48.957026 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-05 04:35:48.957037 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-05 04:35:48.957047 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:35:48.957058 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-05 04:35:48.957068 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-05 04:35:48.957079 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:35:48.957094 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-05 04:35:48.957105 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-05 04:35:48.957116 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:35:48.957126 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-05 04:35:48.957137 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-05 04:35:48.957148 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:35:48.957158 | orchestrator |
2026-04-05 04:35:48.957169 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-05 04:35:48.957180 | orchestrator | Sunday 05 April 2026 04:35:48 +0000 (0:00:01.945) 0:00:40.448 **********
2026-04-05 04:35:48.957191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:35:48.957202 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:35:48.957221 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:37:37.806393 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.806505 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.806521 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.806534 | orchestrator |
2026-04-05 04:37:37.806549 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-05 04:37:37.806562 | orchestrator | Sunday 05 April 2026 04:35:50 +0000 (0:00:02.234) 0:00:42.682 **********
2026-04-05 04:37:37.806574 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:37:37.806586 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:37:37.806598 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:37:37.806609 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.806620 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.806632 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.806643 | orchestrator |
2026-04-05 04:37:37.806655 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-05 04:37:37.806666 | orchestrator |
2026-04-05 04:37:37.806678 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-05 04:37:37.806690 | orchestrator | Sunday 05 April 2026 04:35:53 +0000 (0:00:03.164) 0:00:45.847 **********
2026-04-05 04:37:37.806702 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.806715 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.806726 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.806738 | orchestrator |
2026-04-05 04:37:37.806749 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-05 04:37:37.806761 | orchestrator | Sunday 05 April 2026 04:35:57 +0000 (0:00:04.167) 0:00:50.014 **********
2026-04-05 04:37:37.806772 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.806784 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.806796 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.806807 | orchestrator |
2026-04-05 04:37:37.806819 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-05 04:37:37.806831 | orchestrator | Sunday 05 April 2026 04:36:00 +0000 (0:00:02.708) 0:00:52.723 **********
2026-04-05 04:37:37.806843 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.806855 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:37:37.806866 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:37:37.806877 | orchestrator |
2026-04-05 04:37:37.806889 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-05 04:37:37.806902 | orchestrator | Sunday 05 April 2026 04:36:02 +0000 (0:00:02.164) 0:00:54.888 **********
2026-04-05 04:37:37.806914 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.806928 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.806940 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.806951 | orchestrator |
2026-04-05 04:37:37.806964 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-05 04:37:37.806976 | orchestrator | Sunday 05 April 2026 04:36:04 +0000 (0:00:01.681) 0:00:56.570 **********
2026-04-05 04:37:37.806989 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.807002 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807015 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807028 | orchestrator |
2026-04-05 04:37:37.807040 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-05 04:37:37.807051 | orchestrator | Sunday 05 April 2026 04:36:05 +0000 (0:00:01.460) 0:00:58.030 **********
2026-04-05 04:37:37.807062 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807072 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.807082 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.807092 | orchestrator |
2026-04-05 04:37:37.807104 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-05 04:37:37.807116 | orchestrator | Sunday 05 April 2026 04:36:07 +0000 (0:00:02.019) 0:01:00.050 **********
2026-04-05 04:37:37.807128 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807139 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.807150 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.807187 | orchestrator |
2026-04-05 04:37:37.807200 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-05 04:37:37.807211 | orchestrator | Sunday 05 April 2026 04:36:10 +0000 (0:00:02.321) 0:01:02.371 **********
2026-04-05 04:37:37.807223 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 04:37:37.807235 | orchestrator |
2026-04-05 04:37:37.807246 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-05 04:37:37.807256 | orchestrator | Sunday 05 April 2026 04:36:12 +0000 (0:00:01.810) 0:01:04.181 **********
2026-04-05 04:37:37.807267 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807278 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.807290 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.807320 | orchestrator |
2026-04-05 04:37:37.807333 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-05 04:37:37.807344 | orchestrator | Sunday 05 April 2026 04:36:15 +0000 (0:00:02.930) 0:01:07.112 **********
2026-04-05 04:37:37.807355 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807367 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807379 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807390 | orchestrator |
2026-04-05 04:37:37.807443 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-05 04:37:37.807456 | orchestrator | Sunday 05 April 2026 04:36:16 +0000 (0:00:01.610) 0:01:08.723 **********
2026-04-05 04:37:37.807467 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807477 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807488 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.807498 | orchestrator |
2026-04-05 04:37:37.807508 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-05 04:37:37.807520 | orchestrator | Sunday 05 April 2026 04:36:18 +0000 (0:00:01.878) 0:01:10.602 **********
2026-04-05 04:37:37.807533 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807545 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807558 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.807569 | orchestrator |
2026-04-05 04:37:37.807581 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-05 04:37:37.807591 | orchestrator | Sunday 05 April 2026 04:36:20 +0000 (0:00:02.454) 0:01:13.056 **********
2026-04-05 04:37:37.807601 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.807613 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807640 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807651 | orchestrator |
2026-04-05 04:37:37.807662 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-05 04:37:37.807690 | orchestrator | Sunday 05 April 2026 04:36:22 +0000 (0:00:01.438) 0:01:14.495 **********
2026-04-05 04:37:37.807701 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.807713 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.807726 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.807737 | orchestrator |
2026-04-05 04:37:37.807749 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-05 04:37:37.807767 | orchestrator | Sunday 05 April 2026 04:36:23 +0000 (0:00:01.513) 0:01:16.009 **********
2026-04-05 04:37:37.807778 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.807788 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:37:37.807799 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:37:37.807810 | orchestrator |
2026-04-05 04:37:37.807820 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-05 04:37:37.807830 | orchestrator | Sunday 05 April 2026 04:36:26 +0000 (0:00:02.414) 0:01:18.424 **********
2026-04-05 04:37:37.807840 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807851 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.807861 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.807871 | orchestrator |
2026-04-05 04:37:37.807883 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-05 04:37:37.807909 | orchestrator | Sunday 05 April 2026 04:36:28 +0000 (0:00:02.249) 0:01:20.673 **********
2026-04-05 04:37:37.807922 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.807934 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.807945 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.807957 | orchestrator |
2026-04-05 04:37:37.807969 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-05 04:37:37.807982 | orchestrator | Sunday 05 April 2026 04:36:30 +0000 (0:00:01.478) 0:01:22.152 **********
2026-04-05 04:37:37.807994 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 04:37:37.808008 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 04:37:37.808020 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-05 04:37:37.808031 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 04:37:37.808043 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 04:37:37.808055 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-05 04:37:37.808067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 04:37:37.808079 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 04:37:37.808091 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-05 04:37:37.808102 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.808112 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.808124 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.808135 | orchestrator |
2026-04-05 04:37:37.808147 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-05 04:37:37.808159 | orchestrator | Sunday 05 April 2026 04:37:03 +0000 (0:00:33.711) 0:01:55.864 **********
2026-04-05 04:37:37.808170 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:37:37.808182 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:37:37.808194 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:37:37.808206 | orchestrator |
2026-04-05 04:37:37.808218 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-05 04:37:37.808228 | orchestrator | Sunday 05 April 2026 04:37:05 +0000 (0:00:01.439) 0:01:57.303 **********
2026-04-05 04:37:37.808240 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.808251 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:37:37.808263 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:37:37.808274 | orchestrator |
2026-04-05 04:37:37.808284 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-05 04:37:37.808296 | orchestrator | Sunday 05 April 2026 04:37:07 +0000 (0:00:02.392) 0:01:59.695 **********
2026-04-05 04:37:37.808338 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.808350 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.808361 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.808371 | orchestrator |
2026-04-05 04:37:37.808383 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-05 04:37:37.808394 | orchestrator | Sunday 05 April 2026 04:37:09 +0000 (0:00:02.305) 0:02:02.001 **********
2026-04-05 04:37:37.808405 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:37:37.808416 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:37:37.808427 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:37:37.808438 | orchestrator |
2026-04-05 04:37:37.808459 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-05 04:37:37.808470 | orchestrator | Sunday 05 April 2026 04:37:36 +0000 (0:00:26.095) 0:02:28.097 **********
2026-04-05 04:37:37.808481 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:37:37.808492 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:37:37.808504 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:37:37.808515 | orchestrator |
2026-04-05 04:37:37.808526 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-05 04:37:37.808548 | orchestrator | Sunday 05 April 2026 04:37:37 +0000 (0:00:01.791) 0:02:29.888 **********
2026-04-05 04:38:28.642563 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642657 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642669 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642676 | orchestrator |
2026-04-05 04:38:28.642684 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-05 04:38:28.642693 | orchestrator | Sunday 05 April 2026 04:37:39 +0000 (0:00:01.870) 0:02:31.759 **********
2026-04-05 04:38:28.642697 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:38:28.642703 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:38:28.642707 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:38:28.642711 | orchestrator |
2026-04-05 04:38:28.642715 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-05 04:38:28.642719 | orchestrator | Sunday 05 April 2026 04:37:41 +0000 (0:00:01.863) 0:02:33.622 **********
2026-04-05 04:38:28.642723 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642727 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642731 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642735 | orchestrator |
2026-04-05 04:38:28.642739 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-05 04:38:28.642743 | orchestrator | Sunday 05 April 2026 04:37:43 +0000 (0:00:01.780) 0:02:35.402 **********
2026-04-05 04:38:28.642746 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642750 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642754 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642758 | orchestrator |
2026-04-05 04:38:28.642761 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-05 04:38:28.642765 | orchestrator | Sunday 05 April 2026 04:37:45 +0000 (0:00:01.715) 0:02:37.118 **********
2026-04-05 04:38:28.642769 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:38:28.642773 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:38:28.642777 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:38:28.642780 | orchestrator |
2026-04-05 04:38:28.642784 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-05 04:38:28.642788 | orchestrator | Sunday 05 April 2026 04:37:46 +0000 (0:00:01.882) 0:02:39.000 **********
2026-04-05 04:38:28.642792 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642796 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642800 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642803 | orchestrator |
2026-04-05 04:38:28.642807 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-05 04:38:28.642811 | orchestrator | Sunday 05 April 2026 04:37:48 +0000 (0:00:01.751) 0:02:40.752 **********
2026-04-05 04:38:28.642815 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:38:28.642818 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:38:28.642822 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:38:28.642826 | orchestrator |
2026-04-05 04:38:28.642830 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-05 04:38:28.642837 | orchestrator | Sunday 05 April 2026 04:37:50 +0000 (0:00:01.909) 0:02:42.662 **********
2026-04-05 04:38:28.642842 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:38:28.642848 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:38:28.642854 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:38:28.642859 | orchestrator |
2026-04-05 04:38:28.642865 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-05 04:38:28.642871 | orchestrator | Sunday 05 April 2026 04:37:52 +0000 (0:00:02.329) 0:02:44.991 **********
2026-04-05 04:38:28.642897 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:38:28.642905 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:38:28.642909 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:38:28.642912 | orchestrator |
2026-04-05 04:38:28.642916 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-05 04:38:28.642920 | orchestrator | Sunday 05 April 2026 04:37:54 +0000 (0:00:01.886) 0:02:46.877 **********
2026-04-05 04:38:28.642924 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:38:28.642927 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:38:28.642931 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:38:28.642935 | orchestrator |
2026-04-05 04:38:28.642939 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-05 04:38:28.642942 | orchestrator | Sunday 05 April 2026 04:37:56 +0000 (0:00:01.493) 0:02:48.370 **********
2026-04-05 04:38:28.642946 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642950 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642953 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642957 | orchestrator |
2026-04-05 04:38:28.642961 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-05 04:38:28.642965 | orchestrator | Sunday 05 April 2026 04:37:58 +0000 (0:00:01.815) 0:02:50.186 **********
2026-04-05 04:38:28.642968 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:38:28.642972 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:38:28.642976 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:38:28.642979 | orchestrator |
2026-04-05 04:38:28.642985 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-05 04:38:28.642993 | orchestrator | Sunday 05 April 2026 04:37:59 +0000 (0:00:01.825) 0:02:52.011 **********
2026-04-05 04:38:28.642998 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 04:38:28.643013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 04:38:28.643023 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-05 04:38:28.643027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 04:38:28.643031 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 04:38:28.643036 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-05 04:38:28.643042 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 04:38:28.643050 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 04:38:28.643070 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-05 04:38:28.643076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-05 04:38:28.643080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 04:38:28.643084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 04:38:28.643088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-05 04:38:28.643091 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 04:38:28.643095 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 04:38:28.643099 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-05 04:38:28.643102 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 04:38:28.643113 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 04:38:28.643119 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-05 04:38:28.643125 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-05 04:38:28.643130 | orchestrator |
2026-04-05 04:38:28.643136 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-05 04:38:28.643142 | orchestrator |
2026-04-05 04:38:28.643148 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-05 04:38:28.643154 | orchestrator | Sunday 05 April 2026 04:38:04 +0000 (0:00:04.715) 0:02:56.727 **********
2026-04-05 04:38:28.643160 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643166 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643173 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643180 | orchestrator |
2026-04-05 04:38:28.643186 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-05 04:38:28.643192 | orchestrator | Sunday 05 April 2026 04:38:06 +0000 (0:00:01.760) 0:02:58.487 **********
2026-04-05 04:38:28.643198 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643203 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643207 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643229 | orchestrator |
2026-04-05 04:38:28.643234 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-05 04:38:28.643239 | orchestrator | Sunday 05 April 2026 04:38:08 +0000 (0:00:01.794) 0:03:00.282 **********
2026-04-05 04:38:28.643243 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643248 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643252 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643256 | orchestrator |
2026-04-05 04:38:28.643261 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-05 04:38:28.643265 | orchestrator | Sunday 05 April 2026 04:38:09 +0000 (0:00:01.469) 0:03:01.751 **********
2026-04-05 04:38:28.643270 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:38:28.643275 | orchestrator |
2026-04-05 04:38:28.643279 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-05 04:38:28.643284 | orchestrator | Sunday 05 April 2026 04:38:11 +0000 (0:00:02.051) 0:03:03.802 **********
2026-04-05 04:38:28.643289 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:38:28.643293 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:38:28.643297 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:38:28.643300 | orchestrator |
2026-04-05 04:38:28.643304 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-05 04:38:28.643308 | orchestrator | Sunday 05 April 2026 04:38:13 +0000 (0:00:01.362) 0:03:05.165 **********
2026-04-05 04:38:28.643311 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:38:28.643315 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:38:28.643319 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:38:28.643323 | orchestrator |
2026-04-05 04:38:28.643326 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-05 04:38:28.643330 | orchestrator | Sunday 05 April 2026 04:38:14 +0000 (0:00:01.462) 0:03:06.628 **********
2026-04-05 04:38:28.643334 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:38:28.643337 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:38:28.643341 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:38:28.643345 | orchestrator |
2026-04-05 04:38:28.643349 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-05 04:38:28.643352 | orchestrator | Sunday 05 April 2026 04:38:15 +0000 (0:00:01.430) 0:03:08.058 **********
2026-04-05 04:38:28.643356 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643360 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643364 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643367 | orchestrator |
2026-04-05 04:38:28.643371 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-05 04:38:28.643381 | orchestrator | Sunday 05 April 2026 04:38:17 +0000 (0:00:01.776) 0:03:09.834 **********
2026-04-05 04:38:28.643386 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643389 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643393 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643397 | orchestrator |
2026-04-05 04:38:28.643401 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-05 04:38:28.643404 | orchestrator | Sunday 05 April 2026 04:38:19 +0000 (0:00:02.235) 0:03:12.070 **********
2026-04-05 04:38:28.643408 | orchestrator | ok: [testbed-node-3]
2026-04-05 04:38:28.643412 | orchestrator | ok: [testbed-node-4]
2026-04-05 04:38:28.643415 | orchestrator | ok: [testbed-node-5]
2026-04-05 04:38:28.643421 | orchestrator |
2026-04-05 04:38:28.643428 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-05 04:38:28.643434 | orchestrator | Sunday 05 April 2026 04:38:22 +0000 (0:00:02.384) 0:03:14.454 **********
2026-04-05 04:38:28.643445 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:39:43.178353 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:39:43.178453 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:39:43.178469 | orchestrator |
2026-04-05 04:39:43.178480 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-05 04:39:43.178491 | orchestrator |
2026-04-05 04:39:43.178501 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-05 04:39:43.178511 | orchestrator | Sunday 05 April 2026 04:38:30 +0000 (0:00:08.431) 0:03:22.887 **********
2026-04-05 04:39:43.178522 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.178533 | orchestrator |
2026-04-05 04:39:43.178543 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-05 04:39:43.178553 | orchestrator | Sunday 05 April 2026 04:38:32 +0000 (0:00:02.171) 0:03:25.058 **********
2026-04-05 04:39:43.178564 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.178575 | orchestrator |
2026-04-05 04:39:43.178585 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-05 04:39:43.178595 | orchestrator | Sunday 05 April 2026 04:38:34 +0000 (0:00:01.626) 0:03:26.684 **********
2026-04-05 04:39:43.178605 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-05 04:39:43.178614 | orchestrator |
2026-04-05 04:39:43.178624 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-05 04:39:43.178634 | orchestrator | Sunday 05 April 2026 04:38:36 +0000 (0:00:01.655) 0:03:28.340 **********
2026-04-05 04:39:43.178643 | orchestrator | changed: [testbed-manager]
2026-04-05 04:39:43.178653 | orchestrator |
2026-04-05 04:39:43.178663 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-05 04:39:43.178674 | orchestrator | Sunday 05 April 2026 04:38:38 +0000 (0:00:02.073) 0:03:30.414 **********
2026-04-05 04:39:43.178684 | orchestrator | changed: [testbed-manager]
2026-04-05 04:39:43.178694 | orchestrator |
2026-04-05 04:39:43.178704 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-05 04:39:43.178714 | orchestrator | Sunday 05 April 2026 04:38:39 +0000 (0:00:01.659) 0:03:32.074 **********
2026-04-05 04:39:43.178726 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 04:39:43.178736 | orchestrator |
2026-04-05 04:39:43.178747 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-05 04:39:43.178757 | orchestrator | Sunday 05 April 2026 04:38:43 +0000 (0:00:03.437) 0:03:35.511 **********
2026-04-05 04:39:43.178767 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-05 04:39:43.178777 | orchestrator |
2026-04-05 04:39:43.178788 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-05 04:39:43.178797 | orchestrator | Sunday 05 April 2026 04:38:45 +0000 (0:00:02.121) 0:03:37.633 **********
2026-04-05 04:39:43.178807 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.178817 | orchestrator |
2026-04-05 04:39:43.178827 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-05 04:39:43.178837 | orchestrator | Sunday 05 April 2026 04:38:46 +0000 (0:00:01.435) 0:03:39.069 **********
2026-04-05 04:39:43.178871 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.178882 | orchestrator |
2026-04-05 04:39:43.178893 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-05 04:39:43.178902 | orchestrator |
2026-04-05 04:39:43.178913 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-05 04:39:43.178923 | orchestrator | Sunday 05 April 2026 04:38:49 +0000 (0:00:02.118) 0:03:41.188 **********
2026-04-05 04:39:43.178933 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.178943 | orchestrator |
2026-04-05 04:39:43.178953 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-05 04:39:43.178964 | orchestrator | Sunday 05 April 2026 04:38:50 +0000 (0:00:01.248) 0:03:42.436 **********
2026-04-05 04:39:43.178974 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-05 04:39:43.178984 | orchestrator |
2026-04-05 04:39:43.178995 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-05 04:39:43.179005 | orchestrator | Sunday 05 April 2026 04:38:52 +0000 (0:00:01.782) 0:03:44.220 **********
2026-04-05 04:39:43.179015 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179025 | orchestrator |
2026-04-05 04:39:43.179035 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-05 04:39:43.179046 | orchestrator | Sunday 05 April 2026 04:38:54 +0000 (0:00:01.979) 0:03:46.200 **********
2026-04-05 04:39:43.179056 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179066 | orchestrator |
2026-04-05 04:39:43.179076 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-05 04:39:43.179086 | orchestrator | Sunday 05 April 2026 04:38:57 +0000 (0:00:03.082) 0:03:49.282 **********
2026-04-05 04:39:43.179120 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179130 | orchestrator |
2026-04-05 04:39:43.179139 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-05 04:39:43.179148 | orchestrator | Sunday 05 April 2026 04:38:58 +0000 (0:00:01.520) 0:03:50.803 **********
2026-04-05 04:39:43.179157 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179166 | orchestrator |
2026-04-05 04:39:43.179175 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-05 04:39:43.179184 | orchestrator | Sunday 05 April 2026 04:39:00 +0000 (0:00:01.542) 0:03:52.345 **********
2026-04-05 04:39:43.179194 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179203 | orchestrator |
2026-04-05 04:39:43.179229 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-05 04:39:43.179240 | orchestrator | Sunday 05 April 2026 04:39:01 +0000 (0:00:01.734) 0:03:54.080 **********
2026-04-05 04:39:43.179249 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179258 | orchestrator |
2026-04-05 04:39:43.179267 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-05 04:39:43.179276 | orchestrator | Sunday 05 April 2026 04:39:04 +0000 (0:00:02.894) 0:03:56.974 **********
2026-04-05 04:39:43.179285 | orchestrator | ok: [testbed-manager]
2026-04-05 04:39:43.179294 | orchestrator |
2026-04-05 04:39:43.179303 | orchestrator | PLAY [Run post actions on master
nodes] **************************************** 2026-04-05 04:39:43.179312 | orchestrator | 2026-04-05 04:39:43.179320 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-05 04:39:43.179344 | orchestrator | Sunday 05 April 2026 04:39:06 +0000 (0:00:02.070) 0:03:59.045 ********** 2026-04-05 04:39:43.179354 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:39:43.179362 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:39:43.179371 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:39:43.179379 | orchestrator | 2026-04-05 04:39:43.179387 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-05 04:39:43.179395 | orchestrator | Sunday 05 April 2026 04:39:08 +0000 (0:00:01.493) 0:04:00.539 ********** 2026-04-05 04:39:43.179404 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:39:43.179413 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:39:43.179431 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:39:43.179440 | orchestrator | 2026-04-05 04:39:43.179450 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-05 04:39:43.179459 | orchestrator | Sunday 05 April 2026 04:39:09 +0000 (0:00:01.545) 0:04:02.084 ********** 2026-04-05 04:39:43.179469 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:39:43.179479 | orchestrator | 2026-04-05 04:39:43.179488 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-05 04:39:43.179497 | orchestrator | Sunday 05 April 2026 04:39:12 +0000 (0:00:02.175) 0:04:04.260 ********** 2026-04-05 04:39:43.179506 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179515 | orchestrator | 2026-04-05 04:39:43.179524 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-04-05 04:39:43.179534 | orchestrator | Sunday 05 April 2026 04:39:14 +0000 (0:00:01.999) 0:04:06.260 ********** 2026-04-05 04:39:43.179544 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179553 | orchestrator | 2026-04-05 04:39:43.179562 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-05 04:39:43.179571 | orchestrator | Sunday 05 April 2026 04:39:16 +0000 (0:00:02.101) 0:04:08.361 ********** 2026-04-05 04:39:43.179580 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:39:43.179589 | orchestrator | 2026-04-05 04:39:43.179598 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-05 04:39:43.179607 | orchestrator | Sunday 05 April 2026 04:39:17 +0000 (0:00:01.220) 0:04:09.582 ********** 2026-04-05 04:39:43.179615 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179625 | orchestrator | 2026-04-05 04:39:43.179634 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-05 04:39:43.179643 | orchestrator | Sunday 05 April 2026 04:39:19 +0000 (0:00:02.219) 0:04:11.801 ********** 2026-04-05 04:39:43.179652 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179660 | orchestrator | 2026-04-05 04:39:43.179669 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-05 04:39:43.179678 | orchestrator | Sunday 05 April 2026 04:39:22 +0000 (0:00:02.522) 0:04:14.324 ********** 2026-04-05 04:39:43.179686 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179696 | orchestrator | 2026-04-05 04:39:43.179705 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-05 04:39:43.179714 | orchestrator | Sunday 05 April 2026 04:39:23 +0000 (0:00:01.267) 0:04:15.592 ********** 2026-04-05 04:39:43.179723 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-05 04:39:43.179732 | orchestrator | 2026-04-05 04:39:43.179741 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-05 04:39:43.179750 | orchestrator | Sunday 05 April 2026 04:39:24 +0000 (0:00:01.279) 0:04:16.871 ********** 2026-04-05 04:39:43.179759 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-04-05 04:39:43.179768 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-04-05 04:39:43.179778 | orchestrator | } 2026-04-05 04:39:43.179788 | orchestrator | 2026-04-05 04:39:43.179797 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-05 04:39:43.179806 | orchestrator | Sunday 05 April 2026 04:39:26 +0000 (0:00:01.240) 0:04:18.112 ********** 2026-04-05 04:39:43.179814 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:39:43.179822 | orchestrator | 2026-04-05 04:39:43.179830 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-05 04:39:43.179839 | orchestrator | Sunday 05 April 2026 04:39:27 +0000 (0:00:01.159) 0:04:19.272 ********** 2026-04-05 04:39:43.179847 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-05 04:39:43.179855 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-05 04:39:43.179863 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-05 04:39:43.179878 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-05 04:39:43.179886 | orchestrator | 2026-04-05 04:39:43.179895 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-05 04:39:43.179904 | orchestrator | Sunday 05 April 2026 04:39:33 +0000 (0:00:05.948) 0:04:25.220 ********** 2026-04-05 04:39:43.179913 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179923 | orchestrator | 2026-04-05 04:39:43.179937 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-05 04:39:43.179946 | orchestrator | Sunday 05 April 2026 04:39:35 +0000 (0:00:02.460) 0:04:27.681 ********** 2026-04-05 04:39:43.179955 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.179964 | orchestrator | 2026-04-05 04:39:43.179973 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-05 04:39:43.179982 | orchestrator | Sunday 05 April 2026 04:39:38 +0000 (0:00:02.963) 0:04:30.645 ********** 2026-04-05 04:39:43.179991 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-05 04:39:43.180000 | orchestrator | 2026-04-05 04:39:43.180025 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-05 04:39:43.180035 | orchestrator | Sunday 05 April 2026 04:39:42 +0000 (0:00:04.444) 0:04:35.089 ********** 2026-04-05 04:39:43.180044 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:39:43.180053 | orchestrator | 2026-04-05 04:39:43.180070 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-05 04:40:17.527095 | orchestrator | Sunday 05 April 2026 04:39:44 +0000 (0:00:01.241) 0:04:36.331 ********** 2026-04-05 04:40:17.527179 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-05 04:40:17.527193 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-05 04:40:17.527206 | orchestrator | 2026-04-05 04:40:17.527222 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-05 04:40:17.527232 | orchestrator | Sunday 05 April 2026 04:39:47 +0000 (0:00:03.211) 0:04:39.543 ********** 2026-04-05 
04:40:17.527242 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:40:17.527253 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:40:17.527262 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:40:17.527271 | orchestrator | 2026-04-05 04:40:17.527279 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-05 04:40:17.527289 | orchestrator | Sunday 05 April 2026 04:39:49 +0000 (0:00:01.577) 0:04:41.120 ********** 2026-04-05 04:40:17.527298 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:40:17.527309 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:40:17.527319 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:40:17.527328 | orchestrator | 2026-04-05 04:40:17.527338 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-05 04:40:17.527347 | orchestrator | 2026-04-05 04:40:17.527356 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-05 04:40:17.527365 | orchestrator | Sunday 05 April 2026 04:39:51 +0000 (0:00:02.480) 0:04:43.601 ********** 2026-04-05 04:40:17.527375 | orchestrator | ok: [testbed-manager] 2026-04-05 04:40:17.527385 | orchestrator | 2026-04-05 04:40:17.527395 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-05 04:40:17.527403 | orchestrator | Sunday 05 April 2026 04:39:52 +0000 (0:00:01.168) 0:04:44.769 ********** 2026-04-05 04:40:17.527409 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-05 04:40:17.527416 | orchestrator | 2026-04-05 04:40:17.527421 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-05 04:40:17.527427 | orchestrator | Sunday 05 April 2026 04:39:54 +0000 (0:00:01.606) 0:04:46.375 ********** 2026-04-05 04:40:17.527434 | orchestrator | ok: [testbed-manager] 2026-04-05 04:40:17.527439 | 
orchestrator | 2026-04-05 04:40:17.527445 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-05 04:40:17.527468 | orchestrator | 2026-04-05 04:40:17.527474 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-05 04:40:17.527480 | orchestrator | Sunday 05 April 2026 04:39:59 +0000 (0:00:05.073) 0:04:51.449 ********** 2026-04-05 04:40:17.527486 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:40:17.527491 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:40:17.527497 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:40:17.527503 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:40:17.527508 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:40:17.527514 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:40:17.527519 | orchestrator | 2026-04-05 04:40:17.527525 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-05 04:40:17.527531 | orchestrator | Sunday 05 April 2026 04:40:01 +0000 (0:00:02.138) 0:04:53.588 ********** 2026-04-05 04:40:17.527537 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 04:40:17.527542 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 04:40:17.527548 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-05 04:40:17.527553 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-05 04:40:17.527559 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-05 04:40:17.527565 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 04:40:17.527570 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-04-05 04:40:17.527576 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 04:40:17.527582 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-05 04:40:17.527588 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 04:40:17.527594 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 04:40:17.527599 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 04:40:17.527606 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 04:40:17.527613 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-05 04:40:17.527621 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-05 04:40:17.527628 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 04:40:17.527635 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 04:40:17.527641 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-05 04:40:17.527648 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 04:40:17.527655 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 04:40:17.527662 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-05 04:40:17.527683 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 04:40:17.527690 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 
04:40:17.527697 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-05 04:40:17.527703 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 04:40:17.527710 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 04:40:17.527720 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-05 04:40:17.527735 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 04:40:17.527755 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 04:40:17.527765 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-05 04:40:17.527774 | orchestrator | 2026-04-05 04:40:17.527784 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-05 04:40:17.527793 | orchestrator | Sunday 05 April 2026 04:40:12 +0000 (0:00:11.208) 0:05:04.796 ********** 2026-04-05 04:40:17.527802 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:40:17.527812 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:40:17.527822 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:40:17.527832 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:40:17.527842 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:40:17.527853 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:40:17.527863 | orchestrator | 2026-04-05 04:40:17.527873 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-05 04:40:17.527882 | orchestrator | Sunday 05 April 2026 04:40:14 +0000 (0:00:01.917) 0:05:06.713 ********** 2026-04-05 04:40:17.527889 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:40:17.527896 | orchestrator | skipping: [testbed-node-4] 
2026-04-05 04:40:17.527902 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:40:17.527908 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:40:17.527913 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:40:17.527919 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:40:17.527925 | orchestrator | 2026-04-05 04:40:17.527930 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:40:17.527937 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 04:40:17.527945 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 04:40:17.527951 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 04:40:17.527957 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 04:40:17.527963 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 04:40:17.527969 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 04:40:17.527974 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 04:40:17.527980 | orchestrator | 2026-04-05 04:40:17.527985 | orchestrator | 2026-04-05 04:40:17.527991 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:40:17.527997 | orchestrator | Sunday 05 April 2026 04:40:17 +0000 (0:00:02.891) 0:05:09.605 ********** 2026-04-05 04:40:17.528003 | orchestrator | =============================================================================== 2026-04-05 04:40:17.528009 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 33.71s 2026-04-05 
04:40:17.528015 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.09s 2026-04-05 04:40:17.528020 | orchestrator | Manage labels ---------------------------------------------------------- 11.21s 2026-04-05 04:40:17.528026 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.43s 2026-04-05 04:40:17.528031 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.95s 2026-04-05 04:40:17.528070 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.07s 2026-04-05 04:40:17.528077 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.72s 2026-04-05 04:40:17.528083 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.44s 2026-04-05 04:40:17.528089 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.17s 2026-04-05 04:40:17.528095 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.84s 2026-04-05 04:40:17.528100 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.44s 2026-04-05 04:40:17.528106 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.21s 2026-04-05 04:40:17.528117 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.17s 2026-04-05 04:40:17.938244 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 3.08s 2026-04-05 04:40:17.938343 | orchestrator | k3s_prereq : Load br_netfilter ------------------------------------------ 3.06s 2026-04-05 04:40:17.938361 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.96s 2026-04-05 04:40:17.938376 | orchestrator | k3s_server : Set 
_kube_vip_bgp_peers fact ------------------------------- 2.93s 2026-04-05 04:40:17.938392 | orchestrator | kubectl : Install required packages ------------------------------------- 2.90s 2026-04-05 04:40:17.938408 | orchestrator | Manage taints ----------------------------------------------------------- 2.89s 2026-04-05 04:40:17.938424 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.81s 2026-04-05 04:40:18.190295 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-05 04:40:18.190373 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-05 04:40:18.197151 | orchestrator | + set -e 2026-04-05 04:40:18.197208 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 04:40:18.197215 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 04:40:18.197221 | orchestrator | ++ INTERACTIVE=false 2026-04-05 04:40:18.197225 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 04:40:18.197229 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 04:40:18.197234 | orchestrator | + osism apply openstackclient 2026-04-05 04:40:29.705167 | orchestrator | 2026-04-05 04:40:29 | INFO  | Prepare task for execution of openstackclient. 2026-04-05 04:40:29.789119 | orchestrator | 2026-04-05 04:40:29 | INFO  | Task ad8b758a-dd9b-407b-83e6-fcb937d390fd (openstackclient) was prepared for execution. 2026-04-05 04:40:29.789226 | orchestrator | 2026-04-05 04:40:29 | INFO  | It takes a moment until task ad8b758a-dd9b-407b-83e6-fcb937d390fd (openstackclient) has been started and output is visible here. 
2026-04-05 04:41:04.341627 | orchestrator | 2026-04-05 04:41:04.341716 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-05 04:41:04.341732 | orchestrator | 2026-04-05 04:41:04.341750 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-05 04:41:04.341767 | orchestrator | Sunday 05 April 2026 04:40:35 +0000 (0:00:02.253) 0:00:02.253 ********** 2026-04-05 04:41:04.341784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-05 04:41:04.341802 | orchestrator | 2026-04-05 04:41:04.341820 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-05 04:41:04.341830 | orchestrator | Sunday 05 April 2026 04:40:37 +0000 (0:00:01.922) 0:00:04.175 ********** 2026-04-05 04:41:04.341840 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-05 04:41:04.341851 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-05 04:41:04.341861 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-05 04:41:04.341871 | orchestrator | 2026-04-05 04:41:04.341881 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-05 04:41:04.341914 | orchestrator | Sunday 05 April 2026 04:40:40 +0000 (0:00:02.816) 0:00:06.992 ********** 2026-04-05 04:41:04.341924 | orchestrator | changed: [testbed-manager] 2026-04-05 04:41:04.341934 | orchestrator | 2026-04-05 04:41:04.341944 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-05 04:41:04.341953 | orchestrator | Sunday 05 April 2026 04:40:42 +0000 (0:00:02.336) 0:00:09.328 ********** 2026-04-05 04:41:04.341963 | orchestrator | ok: [testbed-manager] 2026-04-05 04:41:04.341996 | 
orchestrator | 2026-04-05 04:41:04.342007 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-05 04:41:04.342057 | orchestrator | Sunday 05 April 2026 04:40:45 +0000 (0:00:02.083) 0:00:11.411 ********** 2026-04-05 04:41:04.342067 | orchestrator | ok: [testbed-manager] 2026-04-05 04:41:04.342077 | orchestrator | 2026-04-05 04:41:04.342086 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-05 04:41:04.342096 | orchestrator | Sunday 05 April 2026 04:40:46 +0000 (0:00:01.955) 0:00:13.367 ********** 2026-04-05 04:41:04.342105 | orchestrator | ok: [testbed-manager] 2026-04-05 04:41:04.342115 | orchestrator | 2026-04-05 04:41:04.342124 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-05 04:41:04.342134 | orchestrator | Sunday 05 April 2026 04:40:48 +0000 (0:00:01.594) 0:00:14.962 ********** 2026-04-05 04:41:04.342144 | orchestrator | changed: [testbed-manager] 2026-04-05 04:41:04.342153 | orchestrator | 2026-04-05 04:41:04.342163 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-05 04:41:04.342172 | orchestrator | Sunday 05 April 2026 04:40:58 +0000 (0:00:10.234) 0:00:25.196 ********** 2026-04-05 04:41:04.342182 | orchestrator | changed: [testbed-manager] 2026-04-05 04:41:04.342192 | orchestrator | 2026-04-05 04:41:04.342203 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-05 04:41:04.342216 | orchestrator | Sunday 05 April 2026 04:41:00 +0000 (0:00:01.748) 0:00:26.945 ********** 2026-04-05 04:41:04.342227 | orchestrator | changed: [testbed-manager] 2026-04-05 04:41:04.342238 | orchestrator | 2026-04-05 04:41:04.342249 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-05 04:41:04.342261 | orchestrator | Sunday 05 April 2026 
04:41:02 +0000 (0:00:01.576) 0:00:28.522 ********** 2026-04-05 04:41:04.342272 | orchestrator | ok: [testbed-manager] 2026-04-05 04:41:04.342283 | orchestrator | 2026-04-05 04:41:04.342294 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:41:04.342306 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 04:41:04.342319 | orchestrator | 2026-04-05 04:41:04.342330 | orchestrator | 2026-04-05 04:41:04.342339 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:41:04.342349 | orchestrator | Sunday 05 April 2026 04:41:03 +0000 (0:00:01.830) 0:00:30.352 ********** 2026-04-05 04:41:04.342359 | orchestrator | =============================================================================== 2026-04-05 04:41:04.342368 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.23s 2026-04-05 04:41:04.342378 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.82s 2026-04-05 04:41:04.342387 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.34s 2026-04-05 04:41:04.342397 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.08s 2026-04-05 04:41:04.342406 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.96s 2026-04-05 04:41:04.342416 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.92s 2026-04-05 04:41:04.342425 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.83s 2026-04-05 04:41:04.342434 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.75s 2026-04-05 04:41:04.342444 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.59s 
2026-04-05 04:41:04.342462 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.58s
2026-04-05 04:41:04.552712 | orchestrator | + osism apply -a upgrade common
2026-04-05 04:41:05.902749 | orchestrator | 2026-04-05 04:41:05 | INFO  | Prepare task for execution of common.
2026-04-05 04:41:05.981184 | orchestrator | 2026-04-05 04:41:05 | INFO  | Task 104641de-e09a-4aa4-99ea-b23620364dfd (common) was prepared for execution.
2026-04-05 04:41:05.981263 | orchestrator | 2026-04-05 04:41:05 | INFO  | It takes a moment until task 104641de-e09a-4aa4-99ea-b23620364dfd (common) has been started and output is visible here.
2026-04-05 04:41:25.606540 | orchestrator |
2026-04-05 04:41:25.606655 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-05 04:41:25.606673 | orchestrator |
2026-04-05 04:41:25.606686 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 04:41:25.606698 | orchestrator | Sunday 05 April 2026 04:41:12 +0000 (0:00:02.800) 0:00:02.800 **********
2026-04-05 04:41:25.606709 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:41:25.606722 | orchestrator |
2026-04-05 04:41:25.606733 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-05 04:41:25.606744 | orchestrator | Sunday 05 April 2026 04:41:15 +0000 (0:00:03.443) 0:00:06.243 **********
2026-04-05 04:41:25.606755 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.606766 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.606777 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.606788 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.606800 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.606810 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.606821 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.606832 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.606844 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.606855 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.606865 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.606876 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.606887 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.606917 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.606929 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.606987 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.607000 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.607016 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-05 04:41:25.607028 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-05 04:41:25.607039 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.607049 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-05 04:41:25.607060 | orchestrator |
2026-04-05 04:41:25.607071 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-05 04:41:25.607104 | orchestrator | Sunday 05 April 2026 04:41:20 +0000 (0:00:04.828) 0:00:11.072 **********
2026-04-05 04:41:25.607116 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 04:41:25.607128 | orchestrator |
2026-04-05 04:41:25.607139 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-05 04:41:25.607150 | orchestrator | Sunday 05 April 2026 04:41:23 +0000 (0:00:02.652) 0:00:13.724 **********
2026-04-05 04:41:25.607164 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:25.607184 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:25.607214 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:25.607228 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:25.607239 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:25.607255 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:25.607267 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:25.607287 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:25.607299 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:25.607318 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256294 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256420 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256471 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256484 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:30.256496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:30.256508 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256538 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256751 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256784 | orchestrator |
2026-04-05 04:41:30.256797 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-05 04:41:30.256808 | orchestrator | Sunday 05 April 2026 04:41:29 +0000 (0:00:06.642) 0:00:20.367 **********
2026-04-05 04:41:30.256821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:30.256837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:30.256851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:30.256865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:30.256888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.225876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:31.226177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226190 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:41:31.226203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:31.226265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226316 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:41:31.226327 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:41:31.226337 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:41:31.226348 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:41:31.226359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:31.226371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:31.226394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:31.226405 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:41:31.226426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685352 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:41:33.685369 | orchestrator |
2026-04-05 04:41:33.685382 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-04-05 04:41:33.685394 | orchestrator | Sunday 05 April 2026 04:41:32 +0000 (0:00:02.804) 0:00:23.172 **********
2026-04-05 04:41:33.685407 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:33.685465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:33.685479 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:33.685576 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:41:33.685593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:33.685605 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:41:33.685617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:33.685650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:33.685680 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:41:33.685691 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:41:33.685711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:46.431078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:46.431204 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:41:46.431223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:46.431236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:46.431247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:41:46.431258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:41:46.431268 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:41:46.431279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE':
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:41:46.431313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:41:46.431324 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:41:46.431334 | orchestrator | 2026-04-05 04:41:46.431345 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-05 04:41:46.431356 | orchestrator | Sunday 05 April 2026 04:41:35 +0000 (0:00:03.358) 0:00:26.530 ********** 2026-04-05 04:41:46.431365 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:41:46.431375 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:41:46.431385 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:41:46.431410 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:41:46.431420 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:41:46.431445 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:41:46.431456 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:41:46.431475 | orchestrator | 2026-04-05 04:41:46.431486 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-05 04:41:46.431496 | orchestrator | Sunday 05 April 2026 04:41:37 +0000 (0:00:01.869) 0:00:28.400 ********** 2026-04-05 04:41:46.431505 | orchestrator | skipping: [testbed-manager] 2026-04-05 
04:41:46.431514 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:41:46.431524 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:41:46.431533 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:41:46.431543 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:41:46.431552 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:41:46.431562 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:41:46.431571 | orchestrator | 2026-04-05 04:41:46.431581 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-05 04:41:46.431591 | orchestrator | Sunday 05 April 2026 04:41:39 +0000 (0:00:02.044) 0:00:30.444 ********** 2026-04-05 04:41:46.431600 | orchestrator | skipping: [testbed-manager] 2026-04-05 04:41:46.431610 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:41:46.431619 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:41:46.431629 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:41:46.431638 | orchestrator | skipping: [testbed-node-3] 2026-04-05 04:41:46.431647 | orchestrator | skipping: [testbed-node-4] 2026-04-05 04:41:46.431657 | orchestrator | skipping: [testbed-node-5] 2026-04-05 04:41:46.431666 | orchestrator | 2026-04-05 04:41:46.431675 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-05 04:41:46.431685 | orchestrator | Sunday 05 April 2026 04:41:41 +0000 (0:00:02.131) 0:00:32.577 ********** 2026-04-05 04:41:46.431695 | orchestrator | changed: [testbed-manager] 2026-04-05 04:41:46.431704 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:41:46.431714 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:41:46.431723 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:41:46.431733 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:41:46.431742 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:41:46.431752 | orchestrator | changed: [testbed-node-5] 2026-04-05 
04:41:46.431761 | orchestrator | 2026-04-05 04:41:46.431770 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-05 04:41:46.431780 | orchestrator | Sunday 05 April 2026 04:41:44 +0000 (0:00:03.093) 0:00:35.670 ********** 2026-04-05 04:41:46.431790 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:46.431809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:46.431819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-05 04:41:46.431829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:46.431859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:50.809866 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810161 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810246 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:50.810291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:41:50.810302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:41:50.810358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 04:42:12.609323 | orchestrator | 2026-04-05 04:42:12.609442 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-05 04:42:12.609472 | orchestrator | Sunday 05 April 2026 04:41:51 +0000 (0:00:06.934) 0:00:42.604 ********** 2026-04-05 04:42:12.609491 | orchestrator | [WARNING]: Skipped 2026-04-05 04:42:12.609512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-05 04:42:12.609565 | orchestrator | to this access issue: 2026-04-05 04:42:12.609586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-05 04:42:12.609604 | orchestrator | directory 2026-04-05 04:42:12.609624 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:42:12.609643 | orchestrator | 2026-04-05 04:42:12.609661 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-05 04:42:12.609679 | orchestrator | Sunday 05 April 2026 04:41:54 +0000 (0:00:02.462) 0:00:45.067 ********** 2026-04-05 04:42:12.609699 | orchestrator | [WARNING]: Skipped 2026-04-05 04:42:12.609718 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-05 04:42:12.609738 | orchestrator | to this access issue: 2026-04-05 04:42:12.609758 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-05 04:42:12.609777 | orchestrator | directory 2026-04-05 04:42:12.609797 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:42:12.609817 | orchestrator | 2026-04-05 
04:42:12.609837 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-05 04:42:12.609858 | orchestrator | Sunday 05 April 2026 04:41:56 +0000 (0:00:01.947) 0:00:47.015 ********** 2026-04-05 04:42:12.609908 | orchestrator | [WARNING]: Skipped 2026-04-05 04:42:12.609928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-05 04:42:12.609947 | orchestrator | to this access issue: 2026-04-05 04:42:12.609967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-05 04:42:12.609985 | orchestrator | directory 2026-04-05 04:42:12.610003 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:42:12.610110 | orchestrator | 2026-04-05 04:42:12.610134 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-05 04:42:12.610154 | orchestrator | Sunday 05 April 2026 04:41:58 +0000 (0:00:02.091) 0:00:49.107 ********** 2026-04-05 04:42:12.610175 | orchestrator | [WARNING]: Skipped 2026-04-05 04:42:12.610197 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-05 04:42:12.610218 | orchestrator | to this access issue: 2026-04-05 04:42:12.610238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-05 04:42:12.610259 | orchestrator | directory 2026-04-05 04:42:12.610273 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-05 04:42:12.610284 | orchestrator | 2026-04-05 04:42:12.610295 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-05 04:42:12.610306 | orchestrator | Sunday 05 April 2026 04:42:00 +0000 (0:00:01.919) 0:00:51.026 ********** 2026-04-05 04:42:12.610316 | orchestrator | changed: [testbed-manager] 2026-04-05 04:42:12.610327 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:42:12.610338 | 
orchestrator | changed: [testbed-node-1] 2026-04-05 04:42:12.610348 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:42:12.610359 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:42:12.610369 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:42:12.610380 | orchestrator | changed: [testbed-node-5] 2026-04-05 04:42:12.610390 | orchestrator | 2026-04-05 04:42:12.610401 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-05 04:42:12.610412 | orchestrator | Sunday 05 April 2026 04:42:04 +0000 (0:00:04.498) 0:00:55.525 ********** 2026-04-05 04:42:12.610422 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610435 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610446 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610457 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610468 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610493 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610504 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-05 04:42:12.610515 | orchestrator | 2026-04-05 04:42:12.610526 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-05 04:42:12.610536 | orchestrator | Sunday 05 April 2026 04:42:08 +0000 (0:00:03.622) 0:00:59.147 ********** 2026-04-05 04:42:12.610547 | orchestrator | ok: [testbed-manager] 2026-04-05 04:42:12.610558 | orchestrator | ok: [testbed-node-0] 2026-04-05 
04:42:12.610568 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:42:12.610579 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:42:12.610590 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:42:12.610600 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:42:12.610610 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:42:12.610621 | orchestrator | 2026-04-05 04:42:12.610645 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-05 04:42:12.610656 | orchestrator | Sunday 05 April 2026 04:42:11 +0000 (0:00:03.301) 0:01:02.449 ********** 2026-04-05 04:42:12.610695 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:42:12.610712 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:42:12.610724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:42:12.610736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:42:12.610747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-05 04:42:12.610765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:12.610778 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:12.610798 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:19.781642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781772 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:19.781798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781813 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781851 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781944 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:19.781976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.781990 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.782002 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:19.782014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.782078 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.782100 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:19.782112 | orchestrator |
2026-04-05 04:42:19.782125 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-05 04:42:19.782137 | orchestrator | Sunday 05 April 2026 04:42:14 +0000 (0:00:03.071) 0:01:05.521 **********
2026-04-05 04:42:19.782148 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782160 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782171 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782181 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782192 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782203 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782214 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-05 04:42:19.782224 | orchestrator |
2026-04-05 04:42:19.782241 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-05 04:42:19.782252 | orchestrator | Sunday 05 April 2026 04:42:18 +0000 (0:00:03.279) 0:01:08.801 **********
2026-04-05 04:42:19.782262 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:19.782273 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:19.782287 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:19.782298 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:19.782318 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:24.672057 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:24.672161 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-05 04:42:24.672176 | orchestrator |
2026-04-05 04:42:24.672188 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-04-05 04:42:24.672201 | orchestrator | Sunday 05 April 2026 04:42:21 +0000 (0:00:03.648) 0:01:12.450 **********
2026-04-05 04:42:24.672215 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:24.672230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:24.672268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:24.672281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:24.672293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:24.672319 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672444 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672464 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:24.672535 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:30.053016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:30.053153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053194 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053204 | orchestrator |
2026-04-05 04:42:30.053217 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-05 04:42:30.053226 | orchestrator | Sunday 05 April 2026 04:42:27 +0000 (0:00:05.727) 0:01:18.177 **********
2026-04-05 04:42:30.053236 | orchestrator | changed: [testbed-manager] => {
2026-04-05 04:42:30.053245 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053253 | orchestrator | }
2026-04-05 04:42:30.053261 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 04:42:30.053269 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053277 | orchestrator | }
2026-04-05 04:42:30.053285 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 04:42:30.053293 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053301 | orchestrator | }
2026-04-05 04:42:30.053308 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 04:42:30.053316 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053324 | orchestrator | }
2026-04-05 04:42:30.053332 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 04:42:30.053339 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053353 | orchestrator | }
2026-04-05 04:42:30.053361 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 04:42:30.053369 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053377 | orchestrator | }
2026-04-05 04:42:30.053384 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 04:42:30.053392 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:42:30.053400 | orchestrator | }
2026-04-05 04:42:30.053408 | orchestrator |
2026-04-05 04:42:30.053416 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 04:42:30.053439 | orchestrator | Sunday 05 April 2026 04:42:29 +0000 (0:00:01.997) 0:01:20.175 **********
2026-04-05 04:42:30.053448 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:30.053457 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053465 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053473 | orchestrator | skipping: [testbed-manager]
2026-04-05 04:42:30.053482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:30.053490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:30.053517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:30.053532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.103734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.103906 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:42:35.103929 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:42:35.103945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:35.103968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.103989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104008 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:42:35.104027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:35.104098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104141 | orchestrator | skipping: [testbed-node-3]
2026-04-05 04:42:35.104201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:35.104223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104264 | orchestrator | skipping: [testbed-node-4]
2026-04-05 04:42:35.104282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-05 04:42:35.104302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 04:42:35.104356 | orchestrator | skipping: [testbed-node-5]
2026-04-05 04:42:35.104367 | orchestrator |
2026-04-05 04:42:35.104380 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104392 | orchestrator | Sunday 05 April 2026 04:42:32 +0000 (0:00:03.271) 0:01:23.446 **********
2026-04-05 04:42:35.104403 | orchestrator |
2026-04-05 04:42:35.104414 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104425 | orchestrator | Sunday 05 April 2026 04:42:33 +0000 (0:00:00.463) 0:01:23.909 **********
2026-04-05 04:42:35.104435 | orchestrator |
2026-04-05 04:42:35.104446 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104457 | orchestrator | Sunday 05 April 2026 04:42:33 +0000 (0:00:00.476) 0:01:24.386 **********
2026-04-05 04:42:35.104467 | orchestrator |
2026-04-05 04:42:35.104478 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104488 | orchestrator | Sunday 05 April 2026 04:42:34 +0000 (0:00:00.431) 0:01:24.817 **********
2026-04-05 04:42:35.104499 | orchestrator |
2026-04-05 04:42:35.104510 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104520 | orchestrator | Sunday 05 April 2026 04:42:34 +0000 (0:00:00.481) 0:01:25.299 **********
2026-04-05 04:42:35.104531 | orchestrator |
2026-04-05 04:42:35.104542 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:42:35.104553 | orchestrator | Sunday 05 April 2026 04:42:35 +0000 (0:00:00.418) 0:01:25.717 **********
2026-04-05 04:42:35.104563 | orchestrator |
2026-04-05 04:42:35.104583 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-05 04:45:18.218925 | orchestrator | Sunday 05 April 2026 04:42:35 +0000 (0:00:00.462) 0:01:26.180 **********
2026-04-05 04:45:18.219040 | orchestrator |
2026-04-05 04:45:18.219059 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-05 04:45:18.219071 | orchestrator | Sunday 05 April 2026 04:42:36 +0000 (0:00:00.835) 0:01:27.015 **********
2026-04-05 04:45:18.219082 | orchestrator | changed: [testbed-manager]
2026-04-05 04:45:18.219094 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:45:18.219105 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:45:18.219116 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:45:18.219127 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:45:18.219137 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:45:18.219148 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:45:18.219159 | orchestrator |
2026-04-05 04:45:18.219171 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-05 04:45:18.219182 | orchestrator | Sunday 05 April 2026 04:43:50 +0000 (0:01:14.657) 0:02:41.673 **********
2026-04-05 04:45:18.219192 | orchestrator | changed: [testbed-manager]
2026-04-05 04:45:18.219203 | orchestrator | changed: [testbed-node-3]
2026-04-05 04:45:18.219214 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:45:18.219224 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:45:18.219235 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:45:18.219246 | orchestrator | changed: [testbed-node-4]
2026-04-05 04:45:18.219256 | orchestrator | changed: [testbed-node-5]
2026-04-05 04:45:18.219267 | orchestrator |
2026-04-05 04:45:18.219278 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-05 04:45:18.219315 | orchestrator | Sunday 05 April 2026 04:44:54 +0000 (0:01:03.917) 0:03:45.591 **********
2026-04-05 04:45:18.219327 | orchestrator | ok: [testbed-manager]
2026-04-05 04:45:18.219339 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:45:18.219349 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:45:18.219360 | orchestrator | ok: [testbed-node-2]
2026-04-05
04:45:18.219370 | orchestrator | ok: [testbed-node-3] 2026-04-05 04:45:18.219381 | orchestrator | ok: [testbed-node-4] 2026-04-05 04:45:18.219391 | orchestrator | ok: [testbed-node-5] 2026-04-05 04:45:18.219402 | orchestrator | 2026-04-05 04:45:18.219413 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-05 04:45:18.219424 | orchestrator | Sunday 05 April 2026 04:44:58 +0000 (0:00:03.591) 0:03:49.182 ********** 2026-04-05 04:45:18.219434 | orchestrator | changed: [testbed-manager] 2026-04-05 04:45:18.219445 | orchestrator | changed: [testbed-node-3] 2026-04-05 04:45:18.219456 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:45:18.219466 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:45:18.219477 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:45:18.219488 | orchestrator | changed: [testbed-node-4] 2026-04-05 04:45:18.219498 | orchestrator | changed: [testbed-node-5] 2026-04-05 04:45:18.219509 | orchestrator | 2026-04-05 04:45:18.219519 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:45:18.219531 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219545 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219556 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219567 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219578 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219602 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219614 | orchestrator | testbed-node-5 : ok=18  
changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:45:18.219624 | orchestrator | 2026-04-05 04:45:18.219635 | orchestrator | 2026-04-05 04:45:18.219646 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:45:18.219681 | orchestrator | Sunday 05 April 2026 04:45:17 +0000 (0:00:19.422) 0:04:08.605 ********** 2026-04-05 04:45:18.219692 | orchestrator | =============================================================================== 2026-04-05 04:45:18.219703 | orchestrator | common : Restart fluentd container ------------------------------------- 74.66s 2026-04-05 04:45:18.219714 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 63.92s 2026-04-05 04:45:18.219724 | orchestrator | common : Restart cron container ---------------------------------------- 19.42s 2026-04-05 04:45:18.219735 | orchestrator | common : Copying over config.json files for services -------------------- 6.93s 2026-04-05 04:45:18.219746 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.64s 2026-04-05 04:45:18.219756 | orchestrator | service-check-containers : common | Check containers -------------------- 5.73s 2026-04-05 04:45:18.219767 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.83s 2026-04-05 04:45:18.219778 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.50s 2026-04-05 04:45:18.219789 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.65s 2026-04-05 04:45:18.219807 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.62s 2026-04-05 04:45:18.219818 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.59s 2026-04-05 04:45:18.219846 | orchestrator | common : Flush handlers 
------------------------------------------------- 3.57s 2026-04-05 04:45:18.219858 | orchestrator | common : include_tasks -------------------------------------------------- 3.44s 2026-04-05 04:45:18.219869 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.36s 2026-04-05 04:45:18.219879 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.30s 2026-04-05 04:45:18.219890 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.28s 2026-04-05 04:45:18.219901 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.27s 2026-04-05 04:45:18.219911 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.09s 2026-04-05 04:45:18.219922 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.07s 2026-04-05 04:45:18.219933 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.80s 2026-04-05 04:45:18.412337 | orchestrator | + osism apply -a upgrade loadbalancer 2026-04-05 04:45:19.745751 | orchestrator | 2026-04-05 04:45:19 | INFO  | Prepare task for execution of loadbalancer. 2026-04-05 04:45:19.813378 | orchestrator | 2026-04-05 04:45:19 | INFO  | Task ab62d3eb-9ffc-41ec-ab3f-bbe2354c4339 (loadbalancer) was prepared for execution. 2026-04-05 04:45:19.813491 | orchestrator | 2026-04-05 04:45:19 | INFO  | It takes a moment until task ab62d3eb-9ffc-41ec-ab3f-bbe2354c4339 (loadbalancer) has been started and output is visible here. 
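As an aside on the `skipping:`/`ok:` item lines that dominate this log: the kolla-ansible roles loop over a per-role service map (the dicts echoed in the `(item={'key': ..., 'value': ...})` output) and act only on entries whose `enabled` flag is true for hosts in the matching group. A minimal sketch of that filter, with the map reduced to the fields visible in the log (the structure is inferred from the logged items, not taken from the kolla-ansible source):

```python
# Reduced model of the loadbalancer service map echoed in the log.
# Only 'container_name' and 'enabled' are kept; the real entries also
# carry 'group', 'image', 'volumes', 'dimensions', and 'healthcheck'.
services = {
    "haproxy": {"container_name": "haproxy", "enabled": True},
    "proxysql": {"container_name": "proxysql", "enabled": True},
    "keepalived": {"container_name": "keepalived", "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}

def enabled_services(services: dict) -> dict:
    """Keep only services with a truthy 'enabled' flag.

    Items that fall out of this filter are what Ansible reports as
    'skipping: [host] => (item=...)' lines in the console output.
    """
    return {name: svc for name, svc in services.items() if svc.get("enabled")}

print(sorted(enabled_services(services)))
# haproxy-ssh is disabled in this testbed, so only the other three remain.
```

This is why `haproxy-ssh` appears only in `skipping:` lines below, while `haproxy`, `proxysql`, and `keepalived` get their config directories created and containers handled.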
2026-04-05 04:45:39.668338 | orchestrator | 2026-04-05 04:45:39.668454 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:45:39.668472 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 04:45:39.668485 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 04:45:39.668508 | orchestrator | 2026-04-05 04:45:39.668519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:45:39.668530 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 04:45:39.668541 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 04:45:39.668563 | orchestrator | Sunday 05 April 2026 04:45:24 +0000 (0:00:01.363) 0:00:01.363 ********** 2026-04-05 04:45:39.668574 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:39.668586 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:39.668597 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:39.668607 | orchestrator | 2026-04-05 04:45:39.668618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:45:39.668694 | orchestrator | Sunday 05 April 2026 04:45:25 +0000 (0:00:01.030) 0:00:02.393 ********** 2026-04-05 04:45:39.668707 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-05 04:45:39.668718 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-05 04:45:39.668729 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-05 04:45:39.668740 | orchestrator | 2026-04-05 04:45:39.668750 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-05 04:45:39.668761 | orchestrator | 2026-04-05 04:45:39.668772 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-04-05 04:45:39.668782 | orchestrator | Sunday 05 April 2026 04:45:26 +0000 (0:00:00.739) 0:00:03.133 ********** 2026-04-05 04:45:39.668794 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:45:39.668829 | orchestrator | 2026-04-05 04:45:39.668840 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-04-05 04:45:39.668851 | orchestrator | Sunday 05 April 2026 04:45:27 +0000 (0:00:01.079) 0:00:04.213 ********** 2026-04-05 04:45:39.668862 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:39.668873 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:39.668886 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:39.668899 | orchestrator | 2026-04-05 04:45:39.668912 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-04-05 04:45:39.668924 | orchestrator | Sunday 05 April 2026 04:45:28 +0000 (0:00:01.373) 0:00:05.586 ********** 2026-04-05 04:45:39.668936 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:39.668949 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:39.668962 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:39.668974 | orchestrator | 2026-04-05 04:45:39.668987 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-05 04:45:39.669000 | orchestrator | Sunday 05 April 2026 04:45:30 +0000 (0:00:01.331) 0:00:06.918 ********** 2026-04-05 04:45:39.669011 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:39.669024 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:39.669036 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:39.669048 | orchestrator | 2026-04-05 04:45:39.669061 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-05 04:45:39.669073 | orchestrator | Sunday 05 April 2026 04:45:30 +0000 (0:00:00.756) 0:00:07.674 
********** 2026-04-05 04:45:39.669102 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:45:39.669115 | orchestrator | 2026-04-05 04:45:39.669129 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-05 04:45:39.669141 | orchestrator | Sunday 05 April 2026 04:45:31 +0000 (0:00:00.909) 0:00:08.584 ********** 2026-04-05 04:45:39.669154 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:39.669167 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:39.669179 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:39.669192 | orchestrator | 2026-04-05 04:45:39.669204 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-05 04:45:39.669218 | orchestrator | Sunday 05 April 2026 04:45:32 +0000 (0:00:00.637) 0:00:09.221 ********** 2026-04-05 04:45:39.669230 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669241 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669252 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669263 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669273 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669284 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-05 04:45:39.669295 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 04:45:39.669307 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 04:45:39.669317 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-05 04:45:39.669328 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 04:45:39.669339 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 04:45:39.669367 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-05 04:45:39.669379 | orchestrator | 2026-04-05 04:45:39.669390 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 04:45:39.669400 | orchestrator | Sunday 05 April 2026 04:45:35 +0000 (0:00:03.288) 0:00:12.510 ********** 2026-04-05 04:45:39.669424 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-05 04:45:39.669436 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-05 04:45:39.669447 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-05 04:45:39.669457 | orchestrator | 2026-04-05 04:45:39.669468 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 04:45:39.669479 | orchestrator | Sunday 05 April 2026 04:45:36 +0000 (0:00:00.691) 0:00:13.201 ********** 2026-04-05 04:45:39.669490 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-05 04:45:39.669500 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-05 04:45:39.669511 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-05 04:45:39.669522 | orchestrator | 2026-04-05 04:45:39.669533 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 04:45:39.669543 | orchestrator | Sunday 05 April 2026 04:45:37 +0000 (0:00:01.115) 0:00:14.316 ********** 2026-04-05 04:45:39.669554 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-05 04:45:39.669565 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:45:39.669576 | orchestrator | skipping: [testbed-node-1] 
=> (item=ip_vs)  2026-04-05 04:45:39.669586 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:45:39.669597 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-05 04:45:39.669608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:45:39.669618 | orchestrator | 2026-04-05 04:45:39.669651 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-05 04:45:39.669663 | orchestrator | Sunday 05 April 2026 04:45:38 +0000 (0:00:01.079) 0:00:15.396 ********** 2026-04-05 04:45:39.669681 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 04:45:39.669699 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 04:45:39.669711 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 04:45:39.669722 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:45:39.669751 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:45:46.178963 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:45:46.179086 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:45:46.179104 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:45:46.179116 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:45:46.179128 | orchestrator | 2026-04-05 04:45:46.179140 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-05 04:45:46.179151 | orchestrator | Sunday 05 April 2026 04:45:40 +0000 (0:00:01.697) 0:00:17.094 ********** 2026-04-05 04:45:46.179161 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:46.179172 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:46.179181 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:46.179191 | orchestrator | 2026-04-05 04:45:46.179201 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-05 04:45:46.179210 | orchestrator | Sunday 05 April 2026 04:45:41 +0000 (0:00:01.353) 0:00:18.447 ********** 2026-04-05 04:45:46.179221 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-04-05 04:45:46.179231 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-04-05 04:45:46.179261 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-04-05 04:45:46.179271 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-04-05 04:45:46.179281 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-04-05 04:45:46.179290 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-04-05 04:45:46.179299 | orchestrator | 2026-04-05 04:45:46.179309 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-05 04:45:46.179319 | orchestrator | Sunday 05 April 2026 04:45:43 +0000 (0:00:01.490) 0:00:19.937 ********** 2026-04-05 04:45:46.179328 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:46.179365 
| orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:46.179375 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:46.179384 | orchestrator | 2026-04-05 04:45:46.179394 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-05 04:45:46.179404 | orchestrator | Sunday 05 April 2026 04:45:44 +0000 (0:00:00.933) 0:00:20.871 ********** 2026-04-05 04:45:46.179413 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:45:46.179436 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:45:46.179445 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:45:46.179455 | orchestrator | 2026-04-05 04:45:46.179464 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-05 04:45:46.179474 | orchestrator | Sunday 05 April 2026 04:45:45 +0000 (0:00:01.444) 0:00:22.315 ********** 2026-04-05 04:45:46.179503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 04:45:46.179517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 04:45:46.179536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:45:46.179549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-05 04:45:46.179571 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:45:46.179584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:45:46.179596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:46.179607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:46.179649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 04:45:48.993796 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:45:48.993904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:45:48.993925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:48.993962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:48.993975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 04:45:48.993987 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:45:48.993999 | orchestrator |
2026-04-05 04:45:48.994011 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-05 04:45:48.994064 | orchestrator | Sunday 05 April 2026 04:45:46 +0000 (0:00:00.894) 0:00:23.209 **********
2026-04-05 04:45:48.994077 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:45:48.994109 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:45:48.994128 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:45:48.994140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:48.994160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:48.994172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 04:45:48.994183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:48.994195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:48.994215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 04:45:54.538372 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:54.538492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:54.538501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d', '__omit_place_holder__255488eb175279b343641c29fff4603b5b0bad7d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-05 04:45:54.538506 | orchestrator |
2026-04-05 04:45:54.538512 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-05 04:45:54.538518 | orchestrator | Sunday 05 April 2026 04:45:49 +0000 (0:00:02.733) 0:00:25.943 **********
2026-04-05 04:45:54.538522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:45:54.538527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:45:54.538531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:45:54.538550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:54.538559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:54.538563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:45:54.538567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:54.538574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:54.538580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:45:54.538587 | orchestrator |
2026-04-05 04:45:54.538594 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-05 04:45:54.538600 | orchestrator | Sunday 05 April 2026 04:45:52 +0000 (0:00:03.487) 0:00:29.430 **********
2026-04-05 04:45:54.538607 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 04:45:54.538685 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 04:45:54.538690 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-05 04:45:54.538698 | orchestrator |
2026-04-05 04:45:54.538702 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-05 04:45:54.538711 | orchestrator | Sunday 05 April 2026 04:45:54 +0000 (0:00:01.923) 0:00:31.354 **********
2026-04-05 04:46:11.031830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 04:46:11.031964 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 04:46:11.031981 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-05 04:46:11.031994 | orchestrator |
2026-04-05 04:46:11.032006 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-05 04:46:11.032017 | orchestrator | Sunday 05 April 2026 04:45:57 +0000 (0:00:03.382) 0:00:34.736 **********
2026-04-05 04:46:11.032028 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:46:11.032041 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:46:11.032051 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:46:11.032062 | orchestrator |
2026-04-05 04:46:11.032073 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-05 04:46:11.032084 | orchestrator | Sunday 05 April 2026 04:45:58 +0000 (0:00:00.607) 0:00:35.344 **********
2026-04-05 04:46:11.032096 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 04:46:11.032107 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 04:46:11.032119 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-05 04:46:11.032130 | orchestrator |
2026-04-05 04:46:11.032145 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-05 04:46:11.032166 | orchestrator | Sunday 05 April 2026 04:46:00 +0000 (0:00:02.104) 0:00:37.448 **********
2026-04-05 04:46:11.032186 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 04:46:11.032207 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 04:46:11.032226 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-05 04:46:11.032246 | orchestrator |
2026-04-05 04:46:11.032265 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-05 04:46:11.032286 | orchestrator | Sunday 05 April 2026 04:46:02 +0000 (0:00:00.960) 0:00:39.492 **********
2026-04-05 04:46:11.032306 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 04:46:11.032328 | orchestrator |
2026-04-05 04:46:11.032348 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-05 04:46:11.032368 | orchestrator | Sunday 05 April 2026 04:46:03 +0000 (0:00:00.960) 0:00:40.453 **********
2026-04-05 04:46:11.032383 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-04-05 04:46:11.032397 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-04-05 04:46:11.032411 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-04-05 04:46:11.032424 | orchestrator |
2026-04-05 04:46:11.032436 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-05 04:46:11.032449 | orchestrator | Sunday 05 April 2026 04:46:05 +0000 (0:00:01.618) 0:00:42.072 **********
2026-04-05 04:46:11.032463 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-05 04:46:11.032476 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-05 04:46:11.032489 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-05 04:46:11.032502 | orchestrator |
2026-04-05 04:46:11.032516 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-04-05 04:46:11.032554 | orchestrator | Sunday 05 April 2026 04:46:07 +0000 (0:00:01.817) 0:00:43.889 **********
2026-04-05 04:46:11.032567 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:46:11.032580 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:46:11.032631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:46:11.032644 | orchestrator |
2026-04-05 04:46:11.032658 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-04-05 04:46:11.032671 | orchestrator | Sunday 05 April 2026 04:46:07 +0000 (0:00:00.331) 0:00:44.221 **********
2026-04-05 04:46:11.032684 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:46:11.032697 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:46:11.032710 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:46:11.032722 | orchestrator |
2026-04-05 04:46:11.032736 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-05 04:46:11.032748 | orchestrator | Sunday 05 April 2026 04:46:08 +0000 (0:00:00.659) 0:00:44.881 **********
2026-04-05 04:46:11.032762 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:46:11.032805 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:46:11.032819 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:46:11.032830 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:11.032842 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:11.032861 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:11.032873 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:11.032893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:12.393831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:12.393965 | orchestrator |
2026-04-05 04:46:12.393995 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-05 04:46:12.394082 | orchestrator | Sunday 05 April 2026 04:46:11 +0000 (0:00:03.097) 0:00:47.979 **********
2026-04-05 04:46:12.394100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:46:12.394115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:12.394152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:12.394165 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:46:12.394178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:46:12.394190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:12.394231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:12.394243 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:46:12.394257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:46:12.394271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:12.394293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:12.394306 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:46:12.394319 | orchestrator |
2026-04-05 04:46:12.394333 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-05 04:46:12.394346 | orchestrator | Sunday 05 April 2026 04:46:12 +0000 (0:00:00.887) 0:00:48.866 **********
2026-04-05 04:46:12.394359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:46:12.394372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:12.394393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:46:19.337574 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:46:19.337748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:46:19.337769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:46:19.337808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:46:19.337821 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:19.337833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 04:46:19.337845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 04:46:19.338384 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:46:19.338427 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:19.338448 | orchestrator | 2026-04-05 04:46:19.338468 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-05 04:46:19.338490 | orchestrator | Sunday 05 April 2026 04:46:12 +0000 (0:00:00.846) 0:00:49.713 ********** 2026-04-05 04:46:19.338509 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 04:46:19.338556 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 04:46:19.338574 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-05 04:46:19.338627 | orchestrator | 2026-04-05 04:46:19.338647 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-05 04:46:19.338664 | orchestrator | Sunday 05 April 2026 04:46:14 +0000 (0:00:01.655) 0:00:51.369 ********** 2026-04-05 04:46:19.338683 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 04:46:19.338701 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-05 04:46:19.338737 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-04-05 04:46:19.338758 | orchestrator | 2026-04-05 04:46:19.338778 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-05 04:46:19.338796 | orchestrator | Sunday 05 April 2026 04:46:16 +0000 (0:00:01.723) 0:00:53.092 ********** 2026-04-05 04:46:19.338815 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 04:46:19.338832 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 04:46:19.338851 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 04:46:19.338871 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 04:46:19.338890 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:19.338909 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 04:46:19.338928 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:19.338946 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 04:46:19.338964 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:19.338982 | orchestrator | 2026-04-05 04:46:19.339001 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-05 04:46:19.339020 | orchestrator | Sunday 05 April 2026 04:46:17 +0000 (0:00:01.206) 0:00:54.299 ********** 2026-04-05 04:46:19.339041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-05 04:46:19.339062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-05 04:46:19.339093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-05 04:46:19.339128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:46:21.573855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:46:21.573978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-05 04:46:21.573995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:46:21.574009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:46:21.574089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-05 04:46:21.574103 | orchestrator | 2026-04-05 04:46:21.574117 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-05 04:46:21.574130 | orchestrator | Sunday 05 April 2026 04:46:20 +0000 (0:00:03.006) 0:00:57.306 ********** 2026-04-05 04:46:21.574142 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 04:46:21.574154 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:46:21.574165 | orchestrator | } 2026-04-05 
04:46:21.574176 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 04:46:21.574187 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:46:21.574197 | orchestrator | } 2026-04-05 04:46:21.574208 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 04:46:21.574261 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:46:21.574273 | orchestrator | } 2026-04-05 04:46:21.574285 | orchestrator | 2026-04-05 04:46:21.574296 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 04:46:21.574306 | orchestrator | Sunday 05 April 2026 04:46:21 +0000 (0:00:00.560) 0:00:57.866 ********** 2026-04-05 04:46:21.574336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-05 04:46:21.574349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 04:46:21.574360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:46:21.574456 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:21.574470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-05 04:46:21.574485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 04:46:21.574498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:46:21.574519 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:21.574539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-05 04:46:21.574563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-05 04:46:27.148254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-05 04:46:27.148370 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:27.148386 | orchestrator | 2026-04-05 04:46:27.148397 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-05 04:46:27.148407 | orchestrator | Sunday 05 April 2026 04:46:22 +0000 (0:00:01.054) 0:00:58.921 ********** 2026-04-05 04:46:27.148416 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:46:27.148425 | orchestrator | 2026-04-05 04:46:27.149222 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-05 04:46:27.149240 | orchestrator | Sunday 05 April 2026 04:46:23 +0000 (0:00:01.256) 0:01:00.178 ********** 2026-04-05 04:46:27.149254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:27.149267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:27.149310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:27.149340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:27.149351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:27.149360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:27.149370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:27.149379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:27.149399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:27.149416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:28.063483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063658 | orchestrator | 2026-04-05 04:46:28.063672 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-05 04:46:28.063684 | orchestrator | Sunday 05 April 2026 04:46:27 +0000 (0:00:03.918) 0:01:04.097 ********** 2026-04-05 04:46:28.063699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:28.063764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:28.063777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063821 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:28.063834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:28.063846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:28.063865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:28.063888 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:28.063899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:28.063919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 04:46:37.193248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:37.193380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 04:46:37.193415 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:37.193429 | orchestrator | 2026-04-05 04:46:37.193440 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-05 
04:46:37.193461 | orchestrator | Sunday 05 April 2026 04:46:28 +0000 (0:00:01.058) 0:01:05.156 ********** 2026-04-05 04:46:37.193472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193498 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:37.193508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193534 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:37.193544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:37.193623 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:37.193635 | orchestrator | 
2026-04-05 04:46:37.193645 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-05 04:46:37.193655 | orchestrator | Sunday 05 April 2026 04:46:29 +0000 (0:00:01.352) 0:01:06.508 ********** 2026-04-05 04:46:37.193665 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:46:37.193676 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:46:37.193686 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:46:37.193695 | orchestrator | 2026-04-05 04:46:37.193705 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-05 04:46:37.193715 | orchestrator | Sunday 05 April 2026 04:46:30 +0000 (0:00:01.240) 0:01:07.749 ********** 2026-04-05 04:46:37.193728 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:46:37.193739 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:46:37.193749 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:46:37.193760 | orchestrator | 2026-04-05 04:46:37.193771 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-05 04:46:37.193782 | orchestrator | Sunday 05 April 2026 04:46:33 +0000 (0:00:02.075) 0:01:09.824 ********** 2026-04-05 04:46:37.193794 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:46:37.193804 | orchestrator | 2026-04-05 04:46:37.193816 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-05 04:46:37.193827 | orchestrator | Sunday 05 April 2026 04:46:33 +0000 (0:00:00.850) 0:01:10.675 ********** 2026-04-05 04:46:37.193861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:37.193886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:37.193896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:37.193913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:37.193924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:37.193962 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.083834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:38.083937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.083970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.083984 | orchestrator | 2026-04-05 04:46:38.083997 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-05 04:46:38.084009 | orchestrator | Sunday 05 April 2026 04:46:37 +0000 (0:00:03.498) 0:01:14.174 ********** 2026-04-05 04:46:38.084022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:38.084076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.084090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.084102 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:38.084120 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:38.084133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.084145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:38.084163 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:38.084183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:48.103937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 04:46:48.104054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:46:48.104071 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:48.104086 | orchestrator | 2026-04-05 04:46:48.104098 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-05 04:46:48.104111 | orchestrator | Sunday 05 April 2026 04:46:38 +0000 (0:00:01.029) 0:01:15.203 ********** 2026-04-05 04:46:48.104139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:48.104154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-05 04:46:48.104173 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:48.104185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:48.104196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:48.104230 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:48.104242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:48.104254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:48.104265 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:48.104276 | orchestrator | 2026-04-05 04:46:48.104287 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-05 04:46:48.104298 | orchestrator | Sunday 05 April 2026 04:46:39 +0000 (0:00:00.882) 0:01:16.086 ********** 2026-04-05 04:46:48.104309 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:46:48.104320 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:46:48.104331 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:46:48.104341 | orchestrator | 2026-04-05 04:46:48.104352 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-04-05 04:46:48.104363 | orchestrator | Sunday 05 April 2026 04:46:40 +0000 (0:00:01.198) 0:01:17.284 ********** 2026-04-05 04:46:48.104374 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:46:48.104384 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:46:48.104395 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:46:48.104405 | orchestrator | 2026-04-05 04:46:48.104416 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-05 04:46:48.104427 | orchestrator | Sunday 05 April 2026 04:46:42 +0000 (0:00:02.087) 0:01:19.372 ********** 2026-04-05 04:46:48.104438 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:48.104449 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:48.104460 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:48.104472 | orchestrator | 2026-04-05 04:46:48.104485 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-05 04:46:48.104516 | orchestrator | Sunday 05 April 2026 04:46:43 +0000 (0:00:00.527) 0:01:19.899 ********** 2026-04-05 04:46:48.104529 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:46:48.104542 | orchestrator | 2026-04-05 04:46:48.104696 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-05 04:46:48.104710 | orchestrator | Sunday 05 April 2026 04:46:43 +0000 (0:00:00.690) 0:01:20.589 ********** 2026-04-05 04:46:48.104727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 04:46:48.104750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 04:46:48.104774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-05 04:46:48.104789 | orchestrator | 2026-04-05 04:46:48.104802 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-05 04:46:48.104815 | orchestrator | Sunday 05 April 2026 04:46:46 +0000 (0:00:03.064) 0:01:23.654 ********** 2026-04-05 04:46:48.104828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 04:46:48.104840 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:48.104863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 04:46:56.575258 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:56.575378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-05 04:46:56.575443 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:56.575465 | orchestrator | 2026-04-05 04:46:56.575483 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-05 04:46:56.575513 | orchestrator | Sunday 05 April 2026 04:46:48 +0000 (0:00:01.543) 0:01:25.198 ********** 2026-04-05 04:46:56.575526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 
2000 rise 2 fall 5']}})  2026-04-05 04:46:56.575539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 04:46:56.575624 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:56.575636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 04:46:56.575648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 04:46:56.575659 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:56.575670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 
04:46:56.575681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-05 04:46:56.575692 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:56.575703 | orchestrator | 2026-04-05 04:46:56.575714 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-05 04:46:56.575725 | orchestrator | Sunday 05 April 2026 04:46:50 +0000 (0:00:01.750) 0:01:26.948 ********** 2026-04-05 04:46:56.575769 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:56.575784 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:56.575796 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:56.575808 | orchestrator | 2026-04-05 04:46:56.575821 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-05 04:46:56.575864 | orchestrator | Sunday 05 April 2026 04:46:50 +0000 (0:00:00.729) 0:01:27.677 ********** 2026-04-05 04:46:56.575877 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:56.575889 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:56.575901 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:56.575914 | orchestrator | 2026-04-05 04:46:56.575927 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-05 04:46:56.575940 | orchestrator | Sunday 05 April 2026 04:46:52 +0000 (0:00:01.310) 0:01:28.987 ********** 2026-04-05 04:46:56.575952 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:46:56.575965 | orchestrator | 2026-04-05 
04:46:56.575977 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-05 04:46:56.575990 | orchestrator | Sunday 05 April 2026 04:46:52 +0000 (0:00:00.758) 0:01:29.746 ********** 2026-04-05 04:46:56.576012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:56.576029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 
04:46:56.576045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:56.576060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:56.576102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:57.746166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:46:57.746328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746389 | orchestrator | 2026-04-05 04:46:57.746401 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-05 04:46:57.746412 | orchestrator | Sunday 05 April 2026 04:46:57 +0000 (0:00:04.393) 0:01:34.140 ********** 2026-04-05 04:46:57.746423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:57.746434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:57.746468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682626 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:58.682738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:58.682758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682819 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:58.682857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:46:58.682871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 04:46:58.682912 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:46:58.682923 | orchestrator | 2026-04-05 04:46:58.682935 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-05 04:46:58.682947 | orchestrator | Sunday 05 April 2026 04:46:58 +0000 (0:00:00.755) 0:01:34.895 ********** 2026-04-05 04:46:58.682958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:58.682972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:58.682984 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:46:58.682995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:58.683006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 
04:46:58.683017 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:46:58.683027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:46:58.683052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:07.951643 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:07.951758 | orchestrator | 2026-04-05 04:47:07.951774 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-05 04:47:07.951788 | orchestrator | Sunday 05 April 2026 04:46:58 +0000 (0:00:00.921) 0:01:35.816 ********** 2026-04-05 04:47:07.951800 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:07.951812 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:07.951822 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:07.951833 | orchestrator | 2026-04-05 04:47:07.951845 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-05 04:47:07.951856 | orchestrator | Sunday 05 April 2026 04:47:00 +0000 (0:00:01.524) 0:01:37.341 ********** 2026-04-05 04:47:07.951867 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:07.951878 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:07.951888 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:07.951899 | orchestrator | 2026-04-05 04:47:07.951910 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-05 04:47:07.951921 | orchestrator | Sunday 05 April 2026 04:47:02 +0000 (0:00:02.126) 0:01:39.467 ********** 2026-04-05 04:47:07.951932 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:07.951943 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:07.951954 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:07.951965 | orchestrator | 2026-04-05 04:47:07.951976 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-05 04:47:07.952008 | orchestrator | Sunday 05 April 2026 04:47:02 +0000 (0:00:00.324) 0:01:39.792 ********** 2026-04-05 04:47:07.952019 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:07.952030 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:07.952041 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:07.952051 | orchestrator | 2026-04-05 04:47:07.952063 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-05 04:47:07.952073 | orchestrator | Sunday 05 April 2026 04:47:03 +0000 (0:00:00.314) 0:01:40.106 ********** 2026-04-05 04:47:07.952084 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:47:07.952095 | orchestrator | 2026-04-05 04:47:07.952106 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-05 04:47:07.952117 | orchestrator | Sunday 05 April 2026 04:47:04 +0000 (0:00:01.035) 0:01:41.141 ********** 2026-04-05 04:47:07.952133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:07.952150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:07.952163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 04:47:07.952200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 04:47:07.952213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:07.952233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:47:07.952245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 04:47:07.952256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:07.952269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:07.952293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.764893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:08.765033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765045 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:08.765130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 
04:47:08.765142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 
04:47:08.765176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.765188 | orchestrator | 2026-04-05 04:47:08.765201 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-05 04:47:08.765214 | orchestrator | Sunday 05 April 2026 04:47:08 +0000 (0:00:03.967) 0:01:45.109 ********** 2026-04-05 04:47:08.765239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:08.965316 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:08.965413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965499 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:08.965559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:08.965579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:08.965632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:47:08.965702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-05 04:47:19.514852 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:19.514973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:19.514995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 04:47:19.515011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 04:47:19.515070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 04:47:19.515086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 04:47:19.515118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:47:19.515131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-05 04:47:19.515143 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:47:19.515156 | orchestrator |
2026-04-05 04:47:19.515169 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-04-05 04:47:19.515184 | orchestrator | Sunday 05 April 2026 04:47:09 +0000 (0:00:01.060) 0:01:46.170 **********
2026-04-05 04:47:19.515197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515227 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:47:19.515239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515275 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:47:19.515287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-04-05 04:47:19.515312 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:47:19.515323 | orchestrator |
2026-04-05 04:47:19.515335 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-04-05 04:47:19.515347 | orchestrator | Sunday 05 April 2026 04:47:10 +0000 (0:00:01.380) 0:01:47.550 **********
2026-04-05 04:47:19.515366 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:47:19.515381 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:47:19.515393 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:47:19.515404 | orchestrator |
2026-04-05 04:47:19.515417 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-04-05 04:47:19.515429 | orchestrator | Sunday 05 April 2026 04:47:11 +0000 (0:00:01.159) 0:01:48.710 **********
2026-04-05 04:47:19.515441 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:47:19.515453 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:47:19.515467 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:47:19.515479 | orchestrator |
2026-04-05 04:47:19.515492 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-04-05 04:47:19.515504 | orchestrator | Sunday 05 April 2026 04:47:13 +0000 (0:00:02.048) 0:01:50.758 **********
2026-04-05 04:47:19.515516 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:47:19.515558 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:47:19.515570 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:47:19.515583 | orchestrator |
2026-04-05 04:47:19.515596 | orchestrator | TASK [include_role : glance] ***************************************************
2026-04-05 04:47:19.515608 | orchestrator | Sunday 05 April 2026 04:47:14 +0000 (0:00:00.555) 0:01:51.314 **********
2026-04-05 04:47:19.515620 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 04:47:19.515629 | orchestrator |
2026-04-05 04:47:19.515636 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-04-05 04:47:19.515643 | orchestrator | Sunday 05 April 2026 04:47:15 +0000 (0:00:00.866) 0:01:52.181 **********
2026-04-05 04:47:19.515666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 04:47:19.782814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:19.782943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 04:47:19.783038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:19.783065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 04:47:19.783101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:23.243979 | orchestrator | 2026-04-05 04:47:23.244078 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-05 04:47:23.244108 | orchestrator | Sunday 05 April 2026 04:47:19 +0000 (0:00:04.509) 0:01:56.691 ********** 2026-04-05 04:47:23.244125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 04:47:23.244141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:23.244173 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:23.244209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 04:47:23.244222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:23.244239 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:23.244262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 04:47:35.381117 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-05 04:47:35.381262 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 04:47:35.381281 | orchestrator | 2026-04-05 04:47:35.381294 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-05 04:47:35.381307 | orchestrator | Sunday 05 April 2026 04:47:23 +0000 (0:00:03.474) 0:02:00.165 ********** 2026-04-05 04:47:35.381320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381362 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:35.381375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381418 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:35.381443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-05 04:47:35.381468 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:35.381480 | orchestrator | 2026-04-05 
04:47:35.381492 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-05 04:47:35.381547 | orchestrator | Sunday 05 April 2026 04:47:27 +0000 (0:00:04.156) 0:02:04.322 ********** 2026-04-05 04:47:35.381559 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:35.381586 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:35.381597 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:35.381608 | orchestrator | 2026-04-05 04:47:35.381619 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-05 04:47:35.381632 | orchestrator | Sunday 05 April 2026 04:47:28 +0000 (0:00:01.278) 0:02:05.601 ********** 2026-04-05 04:47:35.381645 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:35.381657 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:35.381669 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:35.381681 | orchestrator | 2026-04-05 04:47:35.381693 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-05 04:47:35.381706 | orchestrator | Sunday 05 April 2026 04:47:30 +0000 (0:00:02.089) 0:02:07.691 ********** 2026-04-05 04:47:35.381719 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:35.381732 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:35.381744 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:35.381757 | orchestrator | 2026-04-05 04:47:35.381769 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-05 04:47:35.381781 | orchestrator | Sunday 05 April 2026 04:47:31 +0000 (0:00:00.382) 0:02:08.073 ********** 2026-04-05 04:47:35.381793 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:47:35.381805 | orchestrator | 2026-04-05 04:47:35.381818 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-05 
04:47:35.381832 | orchestrator | Sunday 05 April 2026 04:47:32 +0000 (0:00:01.098) 0:02:09.172 ********** 2026-04-05 04:47:35.381852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:35.381875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:45.079289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:47:45.079393 | orchestrator | 2026-04-05 04:47:45.079410 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-05 04:47:45.079422 | orchestrator | Sunday 05 April 2026 04:47:35 +0000 (0:00:03.119) 0:02:12.292 ********** 2026-04-05 04:47:45.079434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:45.079445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:45.079456 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:45.079467 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:45.079477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:47:45.079561 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:45.079573 | orchestrator | 2026-04-05 04:47:45.079584 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-05 04:47:45.079593 | orchestrator | Sunday 05 April 2026 04:47:35 +0000 (0:00:00.409) 0:02:12.701 ********** 2026-04-05 04:47:45.079604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079656 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:45.079666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079676 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:45.079723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:47:45.079745 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:45.079755 | orchestrator | 2026-04-05 04:47:45.079765 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-05 04:47:45.079775 | orchestrator | Sunday 05 April 2026 04:47:36 +0000 (0:00:00.883) 0:02:13.584 ********** 2026-04-05 
04:47:45.079784 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:45.079795 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:45.079804 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:45.079814 | orchestrator | 2026-04-05 04:47:45.079824 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-05 04:47:45.079836 | orchestrator | Sunday 05 April 2026 04:47:38 +0000 (0:00:01.243) 0:02:14.828 ********** 2026-04-05 04:47:45.079847 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:45.079860 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:45.079871 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:45.079882 | orchestrator | 2026-04-05 04:47:45.079894 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-05 04:47:45.079905 | orchestrator | Sunday 05 April 2026 04:47:40 +0000 (0:00:02.115) 0:02:16.943 ********** 2026-04-05 04:47:45.079916 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:45.079928 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:45.079940 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:45.079951 | orchestrator | 2026-04-05 04:47:45.079962 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-05 04:47:45.079974 | orchestrator | Sunday 05 April 2026 04:47:40 +0000 (0:00:00.420) 0:02:17.364 ********** 2026-04-05 04:47:45.079986 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:47:45.079996 | orchestrator | 2026-04-05 04:47:45.080006 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-05 04:47:45.080023 | orchestrator | Sunday 05 April 2026 04:47:41 +0000 (0:00:01.132) 0:02:18.497 ********** 2026-04-05 04:47:45.080051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 04:47:45.903538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 04:47:45.903734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 04:47:45.903759 | orchestrator | 2026-04-05 04:47:45.903775 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-05 04:47:45.903789 | orchestrator | Sunday 05 April 2026 04:47:45 +0000 (0:00:03.727) 0:02:22.225 ********** 2026-04-05 04:47:45.903852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 04:47:45.903879 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:45.903904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 04:47:51.745076 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:51.745197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 04:47:51.745232 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:51.745243 | orchestrator | 2026-04-05 04:47:51.745253 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-05 04:47:51.745263 | orchestrator | Sunday 05 April 2026 04:47:46 +0000 (0:00:00.917) 0:02:23.143 ********** 2026-04-05 04:47:51.745273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}})  2026-04-05 04:47:51.745340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745357 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:51.745366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 04:47:51.745430 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:51.745447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-05 04:47:51.745545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-05 04:47:51.745592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-05 04:47:51.745608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:51.745621 | orchestrator | 2026-04-05 04:47:51.745636 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-05 04:47:51.745651 | orchestrator | Sunday 05 April 2026 04:47:47 +0000 (0:00:01.359) 0:02:24.502 ********** 2026-04-05 04:47:51.745667 | orchestrator | 
ok: [testbed-node-0] 2026-04-05 04:47:51.745683 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:51.745693 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:51.745701 | orchestrator | 2026-04-05 04:47:51.745710 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-05 04:47:51.745718 | orchestrator | Sunday 05 April 2026 04:47:48 +0000 (0:00:01.219) 0:02:25.721 ********** 2026-04-05 04:47:51.745727 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:47:51.745744 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:47:51.745753 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:47:51.745761 | orchestrator | 2026-04-05 04:47:51.745770 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-05 04:47:51.745778 | orchestrator | Sunday 05 April 2026 04:47:51 +0000 (0:00:02.119) 0:02:27.841 ********** 2026-04-05 04:47:51.745787 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:51.745795 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:51.745804 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:51.745813 | orchestrator | 2026-04-05 04:47:51.745821 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-05 04:47:51.745829 | orchestrator | Sunday 05 April 2026 04:47:51 +0000 (0:00:00.602) 0:02:28.444 ********** 2026-04-05 04:47:51.745848 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:57.355176 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:57.355309 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:57.355327 | orchestrator | 2026-04-05 04:47:57.355339 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-05 04:47:57.355352 | orchestrator | Sunday 05 April 2026 04:47:51 +0000 (0:00:00.360) 0:02:28.804 ********** 2026-04-05 04:47:57.355380 | orchestrator | included: keystone for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-05 04:47:57.355392 | orchestrator | 2026-04-05 04:47:57.355401 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-05 04:47:57.355408 | orchestrator | Sunday 05 April 2026 04:47:52 +0000 (0:00:01.001) 0:02:29.805 ********** 2026-04-05 04:47:57.355432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 04:47:57.355445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:57.355452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:57.355478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 04:47:57.355527 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:57.355535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:57.355546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 04:47:57.355553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:57.355565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:57.355571 | orchestrator | 2026-04-05 04:47:57.355578 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-05 04:47:57.355585 | orchestrator | Sunday 05 April 2026 04:47:57 +0000 (0:00:04.073) 
0:02:33.879 ********** 2026-04-05 04:47:57.355597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 04:47:58.878902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:58.879001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:58.879012 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:58.879022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 04:47:58.879049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:58.879056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:58.879062 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:58.879083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 04:47:58.879094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 04:47:58.879101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 04:47:58.879113 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:58.879119 | orchestrator | 2026-04-05 04:47:58.879127 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-05 04:47:58.879135 | orchestrator | Sunday 05 April 2026 04:47:57 +0000 (0:00:00.678) 0:02:34.557 ********** 2026-04-05 04:47:58.879143 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-05 04:47:58.879152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-05 04:47:58.879160 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:47:58.879167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-05 04:47:58.879173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-05 04:47:58.879180 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:47:58.879186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-05 04:47:58.879193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  
2026-04-05 04:47:58.879199 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:47:58.879205 | orchestrator | 2026-04-05 04:47:58.879212 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-05 04:47:58.879222 | orchestrator | Sunday 05 April 2026 04:47:58 +0000 (0:00:01.135) 0:02:35.692 ********** 2026-04-05 04:48:08.098637 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:08.098775 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:08.098803 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:08.098830 | orchestrator | 2026-04-05 04:48:08.098849 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-05 04:48:08.098866 | orchestrator | Sunday 05 April 2026 04:48:00 +0000 (0:00:01.280) 0:02:36.973 ********** 2026-04-05 04:48:08.098883 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:08.098898 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:08.098914 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:08.098929 | orchestrator | 2026-04-05 04:48:08.098946 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-05 04:48:08.098962 | orchestrator | Sunday 05 April 2026 04:48:02 +0000 (0:00:02.112) 0:02:39.086 ********** 2026-04-05 04:48:08.098978 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:08.098998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:08.099015 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:08.099060 | orchestrator | 2026-04-05 04:48:08.099072 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-05 04:48:08.099096 | orchestrator | Sunday 05 April 2026 04:48:02 +0000 (0:00:00.327) 0:02:39.414 ********** 2026-04-05 04:48:08.099106 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:48:08.099116 | orchestrator | 2026-04-05 
04:48:08.099125 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-05 04:48:08.099135 | orchestrator | Sunday 05 April 2026 04:48:03 +0000 (0:00:01.235) 0:02:40.649 ********** 2026-04-05 04:48:08.099152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:08.099176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:48:08.099197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:08.099231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 
04:48:08.099259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:08.099272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:48:08.099284 | orchestrator | 2026-04-05 04:48:08.099300 | orchestrator | TASK [haproxy-config : Add configuration for magnum 
when using single external frontend] *** 2026-04-05 04:48:08.099318 | orchestrator | Sunday 05 April 2026 04:48:07 +0000 (0:00:03.896) 0:02:44.545 ********** 2026-04-05 04:48:08.099347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:08.099376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.248748 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:18.248845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:18.248863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.248873 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:18.248919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:18.248930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.248939 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
04:48:18.248947 | orchestrator | 2026-04-05 04:48:18.248956 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-05 04:48:18.248984 | orchestrator | Sunday 05 April 2026 04:48:08 +0000 (0:00:00.764) 0:02:45.310 ********** 2026-04-05 04:48:18.249012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249035 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:18.249043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249063 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:18.249071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:18.249087 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:18.249094 | orchestrator | 2026-04-05 04:48:18.249102 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-05 04:48:18.249110 | orchestrator | Sunday 05 April 2026 04:48:10 +0000 (0:00:01.640) 0:02:46.950 ********** 2026-04-05 04:48:18.249118 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:18.249126 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:18.249134 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:18.249142 | orchestrator | 2026-04-05 04:48:18.249149 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-05 04:48:18.249157 | orchestrator | Sunday 05 April 2026 04:48:11 +0000 (0:00:01.300) 0:02:48.251 ********** 2026-04-05 04:48:18.249164 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:18.249172 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:18.249179 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:18.249187 | orchestrator | 2026-04-05 04:48:18.249195 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-05 04:48:18.249202 | orchestrator | Sunday 05 April 2026 04:48:13 +0000 (0:00:02.111) 0:02:50.362 ********** 2026-04-05 04:48:18.249210 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:48:18.249218 | orchestrator | 2026-04-05 04:48:18.249226 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-05 04:48:18.249234 | orchestrator | Sunday 05 April 2026 04:48:14 +0000 (0:00:01.322) 0:02:51.685 ********** 2026-04-05 04:48:18.249243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:18.249261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.249275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.989849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.989956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:18.989973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990117 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:18.990131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:18.990173 | orchestrator | 2026-04-05 04:48:18.990187 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-05 04:48:18.990199 | orchestrator | Sunday 05 April 2026 04:48:18 +0000 (0:00:03.754) 0:02:55.439 ********** 2026-04-05 04:48:18.990213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:18.990231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.183810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.183911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.183933 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:20.183953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:20.183987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.183997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.184028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.184038 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:20.184048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:20.184057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.184072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.184081 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 04:48:20.184090 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:20.184099 | orchestrator | 2026-04-05 04:48:20.184109 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-05 04:48:20.184118 | orchestrator | Sunday 05 April 2026 04:48:19 +0000 (0:00:00.719) 0:02:56.159 ********** 2026-04-05 04:48:20.184128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:20.184140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:20.184151 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:20.184159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:20.184178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:31.565391 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:31.565621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:31.565699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:48:31.565719 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:31.565736 | orchestrator | 2026-04-05 04:48:31.565755 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-05 04:48:31.565774 | orchestrator | Sunday 05 April 2026 04:48:20 +0000 (0:00:01.319) 0:02:57.479 ********** 2026-04-05 04:48:31.565790 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:31.565808 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:31.565854 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:31.565870 | orchestrator | 2026-04-05 04:48:31.565887 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-05 04:48:31.565903 | orchestrator | Sunday 05 April 2026 04:48:21 +0000 (0:00:01.208) 0:02:58.688 ********** 2026-04-05 04:48:31.565919 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:31.565936 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:31.565951 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:31.565968 | orchestrator | 2026-04-05 04:48:31.565984 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-05 04:48:31.566000 | 
orchestrator | Sunday 05 April 2026 04:48:23 +0000 (0:00:02.125) 0:03:00.813 ********** 2026-04-05 04:48:31.566081 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:48:31.566097 | orchestrator | 2026-04-05 04:48:31.566110 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-05 04:48:31.566124 | orchestrator | Sunday 05 April 2026 04:48:25 +0000 (0:00:01.756) 0:03:02.569 ********** 2026-04-05 04:48:31.566137 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 04:48:31.566149 | orchestrator | 2026-04-05 04:48:31.566162 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-05 04:48:31.566176 | orchestrator | Sunday 05 April 2026 04:48:28 +0000 (0:00:03.233) 0:03:05.802 ********** 2026-04-05 04:48:31.566194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:48:31.566251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:31.566267 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:31.566282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:48:31.566308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:31.566322 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:31.566352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 
04:48:34.121407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:34.121607 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:34.121635 | orchestrator | 2026-04-05 04:48:34.121654 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-05 04:48:34.121675 | orchestrator | Sunday 05 April 2026 04:48:31 +0000 (0:00:02.683) 0:03:08.486 ********** 2026-04-05 04:48:34.121697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:48:34.121721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:34.121740 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:34.121810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:48:34.121863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:34.121884 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:34.121910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:48:34.121956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-05 04:48:44.412889 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:44.412996 | orchestrator | 2026-04-05 04:48:44.413012 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-05 04:48:44.413025 | orchestrator | Sunday 05 April 2026 04:48:34 +0000 (0:00:02.809) 0:03:11.296 ********** 2026-04-05 04:48:44.413037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413063 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:44.413074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413095 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:44.413105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-05 04:48:44.413161 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:44.413171 | orchestrator | 2026-04-05 04:48:44.413181 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-05 04:48:44.413191 | orchestrator | Sunday 05 April 2026 04:48:37 +0000 (0:00:02.682) 0:03:13.978 ********** 2026-04-05 04:48:44.413201 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:48:44.413226 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:48:44.413236 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:48:44.413246 | orchestrator | 2026-04-05 04:48:44.413255 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-05 04:48:44.413265 | orchestrator | Sunday 05 April 2026 04:48:39 +0000 (0:00:01.998) 0:03:15.977 ********** 2026-04-05 04:48:44.413275 | orchestrator | skipping: 
[testbed-node-0]
2026-04-05 04:48:44.413285 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:48:44.413294 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:44.413304 | orchestrator |
2026-04-05 04:48:44.413313 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-05 04:48:44.413323 | orchestrator | Sunday 05 April 2026 04:48:40 +0000 (0:00:01.640) 0:03:17.617 **********
2026-04-05 04:48:44.413333 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:48:44.413343 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:48:44.413352 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:44.413361 | orchestrator |
2026-04-05 04:48:44.413371 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-05 04:48:44.413380 | orchestrator | Sunday 05 April 2026 04:48:41 +0000 (0:00:00.605) 0:03:18.223 **********
2026-04-05 04:48:44.413390 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 04:48:44.413399 | orchestrator |
2026-04-05 04:48:44.413409 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-05 04:48:44.413418 | orchestrator | Sunday 05 April 2026 04:48:42 +0000 (0:00:01.149) 0:03:19.372 **********
2026-04-05 04:48:44.413429 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211',
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 04:48:44.413476 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 04:48:44.413495 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 04:48:44.413506 | orchestrator | 2026-04-05 04:48:44.413520 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-05 04:48:44.413531 | orchestrator | Sunday 05 
April 2026 04:48:44 +0000 (0:00:01.741) 0:03:21.114 ********** 2026-04-05 04:48:44.413548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:48:53.556930 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:53.557028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:48:53.557040 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:53.557047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:48:53.557054 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:48:53.557080 | orchestrator | 2026-04-05 04:48:53.557087 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-05 04:48:53.557095 | orchestrator | Sunday 05 April 2026 04:48:44 +0000 (0:00:00.520) 0:03:21.635 ********** 2026-04-05 04:48:53.557102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 04:48:53.557110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-05 04:48:53.557115 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:53.557121 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:53.557127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-05 04:48:53.557132 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:53.557138 | orchestrator |
2026-04-05 04:48:53.557144 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-05 04:48:53.557149 | orchestrator | Sunday 05 April 2026 04:48:45 +0000 (0:00:00.669) 0:03:22.305 **********
2026-04-05 04:48:53.557155 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:48:53.557160 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:48:53.557166 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:53.557171 | orchestrator |
2026-04-05 04:48:53.557176 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-05 04:48:53.557195 | orchestrator | Sunday 05 April 2026 04:48:46 +0000 (0:00:00.801) 0:03:23.106 **********
2026-04-05 04:48:53.557202 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:48:53.557208 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:48:53.557214 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:53.557220 | orchestrator |
2026-04-05 04:48:53.557226 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-05 04:48:53.557231 | orchestrator | Sunday 05 April 2026 04:48:47 +0000 (0:00:01.370) 0:03:24.477 **********
2026-04-05 04:48:53.557236 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:48:53.557242 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:48:53.557248 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:48:53.557253 | orchestrator |
2026-04-05 04:48:53.557259 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-05 04:48:53.557264 | orchestrator | Sunday 05 April 2026 04:48:48 +0000 (0:00:00.343) 0:03:24.821 **********
2026-04-05 04:48:53.557270
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:48:53.557276 | orchestrator | 2026-04-05 04:48:53.557281 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-05 04:48:53.557287 | orchestrator | Sunday 05 April 2026 04:48:49 +0000 (0:00:01.441) 0:03:26.263 ********** 2026-04-05 04:48:53.557311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:53.557327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.557333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:53.557340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 04:48:53.557382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:53.673829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.673932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.673949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.673982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:53.673997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.674085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 04:48:53.674123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:53.674136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.674154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:53.674166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.674179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.674205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:48:53.772795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.772895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.772912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.772943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:53.772958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 
04:48:53.773014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:53.773030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:48:53.773042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:53.773059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.773070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:48:53.773091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.773112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.908400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:53.908561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.908595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 04:48:53.908639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 04:48:53.908671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:53.908684 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:53.908696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.908715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:53.908727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:53.908748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:53.908768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  
2026-04-05 04:48:55.302804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:48:55.302936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.302985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.303010 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 04:48:55.303064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.303086 | orchestrator | 2026-04-05 04:48:55.303108 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-05 04:48:55.303127 | orchestrator | Sunday 05 April 2026 04:48:54 +0000 (0:00:04.657) 0:03:30.920 ********** 2026-04-05 04:48:55.303175 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:55.303197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.303227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:55.303260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  
2026-04-05 04:48:55.303292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.540981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.541081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.541099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:55.541153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.541166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.541179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:48:55.541211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.541223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-04-05 04:48:55.541242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 04:48:55.541265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.541277 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:48:55.541291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:55.541311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.724663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:55.724817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 04:48:55.724839 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.724854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.724868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.724900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:55.724913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.724939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.724952 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:48:55.724964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.724975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.724995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 04:48:55.938315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.938489 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:48:55.938526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:48:55.938542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.938556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-05 04:48:55.938589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-05 04:48:55.938615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:48:55.938628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.938641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:48:55.938653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 04:48:55.938665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:48:55.938685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-05 04:49:06.433791 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-05 04:49:06.433888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-05 04:49:06.433901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-05 04:49:06.433913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-05 04:49:06.433924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-05 04:49:06.433970 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:06.434001 | orchestrator | 2026-04-05 04:49:06.434011 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-05 04:49:06.434079 | orchestrator | Sunday 05 April 2026 04:48:56 +0000 (0:00:02.023) 0:03:32.943 ********** 2026-04-05 04:49:06.434089 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434125 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:06.434133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434149 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:06.434162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:06.434179 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:06.434186 | orchestrator | 2026-04-05 04:49:06.434195 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users 
config] ************ 2026-04-05 04:49:06.434202 | orchestrator | Sunday 05 April 2026 04:48:58 +0000 (0:00:01.877) 0:03:34.821 ********** 2026-04-05 04:49:06.434211 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:06.434220 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:06.434227 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:06.434235 | orchestrator | 2026-04-05 04:49:06.434243 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-05 04:49:06.434251 | orchestrator | Sunday 05 April 2026 04:48:59 +0000 (0:00:01.186) 0:03:36.008 ********** 2026-04-05 04:49:06.434258 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:06.434266 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:06.434274 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:06.434282 | orchestrator | 2026-04-05 04:49:06.434289 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-05 04:49:06.434297 | orchestrator | Sunday 05 April 2026 04:49:01 +0000 (0:00:02.290) 0:03:38.298 ********** 2026-04-05 04:49:06.434305 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:49:06.434313 | orchestrator | 2026-04-05 04:49:06.434321 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-05 04:49:06.434328 | orchestrator | Sunday 05 April 2026 04:49:02 +0000 (0:00:01.501) 0:03:39.799 ********** 2026-04-05 04:49:06.434338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:49:06.434363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:49:16.910275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:49:16.910380 | orchestrator | 2026-04-05 04:49:16.910389 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-05 04:49:16.910395 | orchestrator | Sunday 05 April 2026 04:49:06 +0000 (0:00:03.712) 0:03:43.512 ********** 2026-04-05 04:49:16.910400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:49:16.910455 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:16.910465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:49:16.910498 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:16.910516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:49:16.910521 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:16.910526 | orchestrator | 2026-04-05 04:49:16.910530 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-05 04:49:16.910539 | orchestrator | Sunday 05 April 2026 04:49:07 +0000 (0:00:00.552) 0:03:44.065 ********** 2026-04-05 04:49:16.910545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910556 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:16.910560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910568 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:16.910572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-05 04:49:16.910585 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:16.910589 | orchestrator | 2026-04-05 04:49:16.910593 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-05 04:49:16.910597 | orchestrator | Sunday 05 April 2026 04:49:08 +0000 (0:00:01.180) 0:03:45.245 ********** 2026-04-05 04:49:16.910601 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:16.910605 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:16.910609 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:16.910613 | orchestrator | 2026-04-05 04:49:16.910617 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-05 04:49:16.910621 | orchestrator | Sunday 05 April 2026 04:49:09 +0000 (0:00:01.250) 0:03:46.496 ********** 2026-04-05 04:49:16.910625 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:16.910629 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:16.910632 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:16.910637 | orchestrator | 2026-04-05 04:49:16.910641 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-04-05 04:49:16.910645 | orchestrator | Sunday 05 April 2026 04:49:11 +0000 (0:00:02.123) 0:03:48.619 ********** 2026-04-05 04:49:16.910649 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:49:16.910653 | orchestrator | 2026-04-05 04:49:16.910657 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-05 04:49:16.910660 | orchestrator | Sunday 05 April 2026 04:49:13 +0000 (0:00:01.557) 0:03:50.176 ********** 2026-04-05 04:49:16.910670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:49:19.151947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:49:19.152062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
04:49:19.152077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:49:19.152088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:19.152121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:19.152133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:49:19.152153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:19.152162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:19.152172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:49:19.152193 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346537 | orchestrator | 2026-04-05 04:49:20.346546 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-05 04:49:20.346552 | orchestrator | Sunday 05 April 2026 04:49:19 +0000 (0:00:05.907) 0:03:56.084 ********** 2026-04-05 04:49:20.346560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:49:20.346567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:49:20.346572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346608 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:20.346615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:49:20.346620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:49:20.346625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:20.346635 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:20.346647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-05 04:49:33.321788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:49:33.321913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 04:49:33.321931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 04:49:33.321944 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:33.321957 | orchestrator | 2026-04-05 04:49:33.321969 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-05 04:49:33.321984 | orchestrator | Sunday 05 April 2026 04:49:20 +0000 (0:00:01.479) 0:03:57.564 ********** 2026-04-05 04:49:33.322004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-05 04:49:33.322199 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:33.322210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322275 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:33.322286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:49:33.322346 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:33.322367 | orchestrator | 2026-04-05 04:49:33.322389 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-05 04:49:33.322452 | orchestrator | Sunday 05 April 2026 04:49:21 +0000 (0:00:01.140) 0:03:58.705 ********** 2026-04-05 04:49:33.322466 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:33.322479 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:33.322492 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:33.322504 | orchestrator | 2026-04-05 04:49:33.322517 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-05 04:49:33.322531 | orchestrator | Sunday 05 April 2026 04:49:23 +0000 (0:00:01.247) 0:03:59.952 ********** 2026-04-05 04:49:33.322542 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:33.322555 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:33.322568 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:33.322580 | orchestrator | 2026-04-05 04:49:33.322593 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-05 04:49:33.322606 | orchestrator | Sunday 05 April 2026 04:49:25 +0000 (0:00:02.265) 0:04:02.218 ********** 2026-04-05 04:49:33.322628 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:49:33.322641 | orchestrator | 2026-04-05 
04:49:33.322654 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-05 04:49:33.322667 | orchestrator | Sunday 05 April 2026 04:49:27 +0000 (0:00:02.042) 0:04:04.260 ********** 2026-04-05 04:49:33.322678 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-05 04:49:33.322690 | orchestrator | 2026-04-05 04:49:33.322701 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-05 04:49:33.322712 | orchestrator | Sunday 05 April 2026 04:49:28 +0000 (0:00:01.428) 0:04:05.688 ********** 2026-04-05 04:49:33.322724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 04:49:33.322744 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 04:49:33.322774 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-05 04:49:46.333699 | orchestrator | 2026-04-05 04:49:46.333792 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-05 04:49:46.333804 | orchestrator | Sunday 05 April 2026 04:49:33 +0000 (0:00:04.545) 0:04:10.234 ********** 2026-04-05 04:49:46.333815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.333825 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:46.333834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.333841 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:46.333848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.333873 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:46.333881 | orchestrator | 2026-04-05 04:49:46.333888 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-05 04:49:46.333894 | orchestrator | Sunday 05 April 2026 04:49:34 +0000 (0:00:01.420) 0:04:11.654 ********** 2026-04-05 04:49:46.333902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333920 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:46.333927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333941 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:46.333960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-05 04:49:46.333974 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:46.333981 | orchestrator | 2026-04-05 04:49:46.333987 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 04:49:46.333994 | orchestrator | Sunday 05 April 2026 04:49:36 +0000 (0:00:02.046) 0:04:13.700 ********** 2026-04-05 04:49:46.334001 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:46.334008 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:46.334060 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:46.334068 | orchestrator | 2026-04-05 04:49:46.334074 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 04:49:46.334081 | orchestrator | Sunday 05 April 2026 04:49:39 +0000 (0:00:02.508) 0:04:16.209 ********** 2026-04-05 04:49:46.334087 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:49:46.334094 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:49:46.334114 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:49:46.334121 | orchestrator | 2026-04-05 04:49:46.334128 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-05 04:49:46.334135 | orchestrator | Sunday 05 April 2026 04:49:42 +0000 (0:00:03.233) 0:04:19.442 ********** 2026-04-05 04:49:46.334142 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-05 04:49:46.334150 | orchestrator | 2026-04-05 04:49:46.334156 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-05 04:49:46.334163 | orchestrator | Sunday 05 April 2026 04:49:43 +0000 (0:00:00.888) 0:04:20.331 ********** 2026-04-05 04:49:46.334177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.334185 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:46.334192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.334199 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:46.334205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.334212 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:49:46.334219 | orchestrator | 2026-04-05 04:49:46.334226 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-05 04:49:46.334233 | orchestrator | Sunday 05 April 2026 04:49:45 +0000 (0:00:01.520) 0:04:21.851 ********** 2026-04-05 04:49:46.334239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.334246 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:49:46.334257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:49:46.334266 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:49:46.334278 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-05 04:50:10.662267 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:10.662407 | orchestrator | 2026-04-05 04:50:10.662447 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-05 04:50:10.662460 | orchestrator | Sunday 05 April 2026 04:49:46 +0000 (0:00:01.395) 0:04:23.247 ********** 2026-04-05 04:50:10.662471 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:10.662482 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:10.662493 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:10.662504 | orchestrator | 2026-04-05 04:50:10.662515 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 04:50:10.662526 | orchestrator | Sunday 05 April 2026 04:49:48 +0000 (0:00:01.675) 0:04:24.922 ********** 2026-04-05 04:50:10.662537 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:50:10.662548 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:50:10.662559 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:10.662569 | orchestrator | 2026-04-05 04:50:10.662580 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 04:50:10.662591 | orchestrator | Sunday 05 April 2026 04:49:51 +0000 (0:00:03.438) 0:04:28.361 ********** 2026-04-05 04:50:10.662601 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:10.662612 | orchestrator | ok: [testbed-node-1] 2026-04-05 
04:50:10.662622 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:50:10.662633 | orchestrator | 2026-04-05 04:50:10.662644 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-05 04:50:10.662654 | orchestrator | Sunday 05 April 2026 04:49:54 +0000 (0:00:03.104) 0:04:31.465 ********** 2026-04-05 04:50:10.662665 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-05 04:50:10.662677 | orchestrator | 2026-04-05 04:50:10.662688 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-05 04:50:10.662699 | orchestrator | Sunday 05 April 2026 04:49:55 +0000 (0:00:01.074) 0:04:32.540 ********** 2026-04-05 04:50:10.662712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662726 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:10.662738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662749 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:10.662761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662793 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:10.662806 | orchestrator | 2026-04-05 04:50:10.662832 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-05 04:50:10.662847 | orchestrator | Sunday 05 April 2026 04:49:57 +0000 (0:00:01.532) 0:04:34.073 ********** 2026-04-05 04:50:10.662867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662881 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:10.662911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662926 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:10.662939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-05 04:50:10.662952 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:10.662965 | orchestrator | 2026-04-05 04:50:10.662977 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-05 04:50:10.662990 | orchestrator | Sunday 05 April 2026 04:49:58 +0000 (0:00:01.464) 0:04:35.538 ********** 2026-04-05 04:50:10.663003 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:10.663016 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:10.663029 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:10.663042 | orchestrator | 2026-04-05 04:50:10.663054 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-05 04:50:10.663067 | orchestrator | Sunday 05 April 2026 04:50:00 +0000 (0:00:01.563) 0:04:37.102 ********** 2026-04-05 04:50:10.663080 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:10.663092 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:50:10.663103 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:50:10.663114 | orchestrator 
| 2026-04-05 04:50:10.663125 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-05 04:50:10.663135 | orchestrator | Sunday 05 April 2026 04:50:02 +0000 (0:00:02.577) 0:04:39.679 ********** 2026-04-05 04:50:10.663146 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:10.663157 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:50:10.663167 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:50:10.663177 | orchestrator | 2026-04-05 04:50:10.663188 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-05 04:50:10.663199 | orchestrator | Sunday 05 April 2026 04:50:06 +0000 (0:00:03.373) 0:04:43.053 ********** 2026-04-05 04:50:10.663210 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:50:10.663221 | orchestrator | 2026-04-05 04:50:10.663231 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-05 04:50:10.663242 | orchestrator | Sunday 05 April 2026 04:50:07 +0000 (0:00:01.414) 0:04:44.467 ********** 2026-04-05 04:50:10.663254 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 04:50:10.663277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:10.663296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.114870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.114974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:50:11.114991 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 04:50:11.115044 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 04:50:11.115058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:11.115088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:11.115101 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.115114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.115125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.115144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:50:11.115160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.115172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:50:11.115184 | orchestrator | 2026-04-05 04:50:11.115203 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-05 04:50:11.934959 | orchestrator | Sunday 05 April 2026 04:50:11 +0000 (0:00:03.464) 0:04:47.932 ********** 2026-04-05 04:50:11.935090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 04:50:11.935113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:11.935157 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.935171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.935184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 
04:50:11.935196 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:11.935232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 04:50:11.935245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:11.935257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.935276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:11.935442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:50:11.935467 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:11.935482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 04:50:11.935508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 04:50:23.590261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 04:50:23.590473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 04:50:23.590523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 04:50:23.590538 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:23.590551 | orchestrator | 2026-04-05 04:50:23.590564 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-05 04:50:23.590577 | orchestrator | Sunday 05 April 2026 04:50:12 +0000 (0:00:00.979) 0:04:48.911 ********** 2026-04-05 04:50:23.590588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-04-05 04:50:23.590620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 04:50:23.590633 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:23.590644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 04:50:23.590655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 04:50:23.590666 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:23.590677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 04:50:23.590688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-05 04:50:23.590699 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:23.590710 | orchestrator | 2026-04-05 04:50:23.590721 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-05 04:50:23.590731 | orchestrator | Sunday 05 April 2026 04:50:13 +0000 (0:00:00.961) 0:04:49.873 ********** 2026-04-05 04:50:23.590743 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:23.590755 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:50:23.590766 | orchestrator | ok: 
[testbed-node-2] 2026-04-05 04:50:23.590776 | orchestrator | 2026-04-05 04:50:23.590787 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-05 04:50:23.590798 | orchestrator | Sunday 05 April 2026 04:50:14 +0000 (0:00:01.489) 0:04:51.362 ********** 2026-04-05 04:50:23.590808 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:50:23.590819 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:50:23.590856 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:50:23.590869 | orchestrator | 2026-04-05 04:50:23.590879 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-05 04:50:23.590890 | orchestrator | Sunday 05 April 2026 04:50:16 +0000 (0:00:02.149) 0:04:53.512 ********** 2026-04-05 04:50:23.590902 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:50:23.590913 | orchestrator | 2026-04-05 04:50:23.590924 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-05 04:50:23.590935 | orchestrator | Sunday 05 April 2026 04:50:18 +0000 (0:00:01.359) 0:04:54.872 ********** 2026-04-05 04:50:23.590949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:50:23.590963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:50:23.590981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
04:50:23.591002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:50:25.113809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:50:25.113995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:50:25.114091 | orchestrator | 2026-04-05 04:50:25.114107 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-05 04:50:25.114124 | orchestrator | Sunday 05 April 2026 04:50:24 +0000 (0:00:06.357) 0:05:01.230 ********** 2026-04-05 
04:50:25.114142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:25.114231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:50:25.114246 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:25.114259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:25.114279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:50:25.114291 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:25.114303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:25.114334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:50:32.643730 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:32.643862 | orchestrator | 2026-04-05 04:50:32.643880 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-05 04:50:32.643894 | orchestrator | Sunday 05 April 2026 04:50:25 +0000 (0:00:00.819) 0:05:02.049 ********** 2026-04-05 04:50:32.643907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:32.643922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 04:50:32.643937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 04:50:32.643949 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 04:50:32.643961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:32.643990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 04:50:32.644002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 04:50:32.644013 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:32.644049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:32.644060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-05 04:50:32.644071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  
2026-04-05 04:50:32.644082 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:32.644093 | orchestrator | 2026-04-05 04:50:32.644104 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-05 04:50:32.644115 | orchestrator | Sunday 05 April 2026 04:50:26 +0000 (0:00:01.070) 0:05:03.120 ********** 2026-04-05 04:50:32.644126 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:32.644136 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:32.644147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:32.644157 | orchestrator | 2026-04-05 04:50:32.644168 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-05 04:50:32.644179 | orchestrator | Sunday 05 April 2026 04:50:27 +0000 (0:00:00.867) 0:05:03.988 ********** 2026-04-05 04:50:32.644189 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:32.644200 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:32.644210 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:32.644221 | orchestrator | 2026-04-05 04:50:32.644232 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-05 04:50:32.644244 | orchestrator | Sunday 05 April 2026 04:50:28 +0000 (0:00:01.427) 0:05:05.416 ********** 2026-04-05 04:50:32.644257 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:50:32.644271 | orchestrator | 2026-04-05 04:50:32.644283 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-05 04:50:32.644296 | orchestrator | Sunday 05 April 2026 04:50:30 +0000 (0:00:01.446) 0:05:06.862 ********** 2026-04-05 04:50:32.644333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 04:50:32.644383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:32.644413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:32.644427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:32.644441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:32.644464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 04:50:34.511307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:34.511483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 04:50:34.511525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:34.512381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:34.512404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:34.512416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:34.512448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:34.512459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:34.512469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:34.512498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:50:34.512511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:34.512523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:34.512541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.449455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 
04:50:35.449606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:50:35.449625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:50:35.449638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:35.449667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:35.449688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.449703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.449712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.449724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.449741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.449758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.449768 | orchestrator | 2026-04-05 04:50:35.449779 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-05 04:50:35.449789 | orchestrator | Sunday 05 April 2026 04:50:34 +0000 (0:00:04.890) 0:05:11.753 ********** 2026-04-05 04:50:35.449807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 04:50:35.571951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:35.572057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.572102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 04:50:35.572156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:35.572177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:35.572190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:35.572202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.572268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.727122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.727213 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:35.727230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:35.727244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:35.727277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.727287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.727324 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.727395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 04:50:35.727406 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:35.727417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 04:50:35.727427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.727443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:35.727452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 04:50:35.727474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:50:42.982618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-05 04:50:42.982731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:42.982748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 04:50:42.982793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 04:50:42.982806 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:42.982819 | orchestrator | 2026-04-05 04:50:42.982831 | orchestrator | TASK 
[haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-05 04:50:42.982844 | orchestrator | Sunday 05 April 2026 04:50:35 +0000 (0:00:00.944) 0:05:12.697 ********** 2026-04-05 04:50:42.982856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.982871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.982900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.982930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.982943 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:42.982954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.982965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.982976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.982988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.983007 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:42.983018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.983029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-05 04:50:42.983040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.983051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-05 04:50:42.983062 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:42.983073 | orchestrator | 2026-04-05 04:50:42.983084 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-05 04:50:42.983095 | orchestrator | Sunday 05 April 2026 04:50:37 +0000 (0:00:01.399) 0:05:14.097 ********** 2026-04-05 04:50:42.983105 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:42.983116 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:42.983127 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:42.983137 | orchestrator | 2026-04-05 04:50:42.983151 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-05 04:50:42.983164 | orchestrator | Sunday 05 April 2026 04:50:37 +0000 (0:00:00.479) 0:05:14.577 ********** 2026-04-05 
04:50:42.983176 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:42.983189 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:42.983201 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:42.983213 | orchestrator | 2026-04-05 04:50:42.983231 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-05 04:50:42.983244 | orchestrator | Sunday 05 April 2026 04:50:39 +0000 (0:00:01.457) 0:05:16.034 ********** 2026-04-05 04:50:42.983257 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:50:42.983269 | orchestrator | 2026-04-05 04:50:42.983283 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-05 04:50:42.983295 | orchestrator | Sunday 05 April 2026 04:50:40 +0000 (0:00:01.781) 0:05:17.816 ********** 2026-04-05 04:50:42.983317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 04:50:52.777450 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 04:50:52.777567 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 04:50:52.777586 | 
orchestrator | 2026-04-05 04:50:52.777600 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-05 04:50:52.777613 | orchestrator | Sunday 05 April 2026 04:50:43 +0000 (0:00:02.471) 0:05:20.288 ********** 2026-04-05 04:50:52.777644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 04:50:52.777676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 04:50:52.777713 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:52.777726 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:52.777738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 04:50:52.777750 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:52.777760 | orchestrator | 2026-04-05 04:50:52.777772 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-05 04:50:52.777783 | orchestrator | Sunday 05 April 2026 04:50:43 +0000 (0:00:00.472) 0:05:20.760 ********** 2026-04-05 04:50:52.777794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 
'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 04:50:52.777806 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:52.777817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 04:50:52.777828 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:52.777839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-05 04:50:52.777850 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:52.777860 | orchestrator | 2026-04-05 04:50:52.777871 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-05 04:50:52.777882 | orchestrator | Sunday 05 April 2026 04:50:44 +0000 (0:00:01.021) 0:05:21.782 ********** 2026-04-05 04:50:52.777893 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:52.777903 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:52.777914 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:52.777925 | orchestrator | 2026-04-05 04:50:52.777938 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-05 04:50:52.777952 | orchestrator | Sunday 05 April 2026 04:50:45 +0000 (0:00:00.511) 0:05:22.294 ********** 2026-04-05 04:50:52.777964 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:52.777977 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:50:52.777989 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:50:52.778001 | orchestrator | 2026-04-05 04:50:52.778064 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-05 04:50:52.778079 | orchestrator | Sunday 05 April 2026 04:50:46 +0000 (0:00:01.461) 0:05:23.755 ********** 2026-04-05 04:50:52.778102 | 
orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:50:52.778114 | orchestrator | 2026-04-05 04:50:52.778127 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-05 04:50:52.778140 | orchestrator | Sunday 05 April 2026 04:50:48 +0000 (0:00:01.795) 0:05:25.551 ********** 2026-04-05 04:50:52.778153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 04:50:52.778218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 04:50:55.983573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 04:50:55.983688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:50:55.983726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:50:55.983758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 04:50:55.983773 | orchestrator | 2026-04-05 04:50:55.983787 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-05 04:50:55.983798 | orchestrator | Sunday 05 April 2026 04:50:55 +0000 (0:00:06.824) 0:05:32.375 ********** 2026-04-05 04:50:55.983811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 04:50:55.983836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:50:55.983849 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:50:55.983861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 04:50:55.983882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 04:51:05.665811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:51:05.665913 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:51:05.665934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 04:51:05.665940 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
04:51:05.665945 | orchestrator |
2026-04-05 04:51:05.665951 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-05 04:51:05.665957 | orchestrator | Sunday 05 April 2026 04:50:56 +0000 (0:00:00.524) 0:05:32.900 **********
2026-04-05 04:51:05.665963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.665971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.665978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.665984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.665989 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.665994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.665999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.666058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.666066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-05 04:51:05.666076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.666081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.666086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.666090 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-05 04:51:05.666104 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666108 | orchestrator |
2026-04-05 04:51:05.666113 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-05 04:51:05.666118 | orchestrator | Sunday 05 April 2026 04:50:57 +0000 (0:00:01.239) 0:05:34.139 **********
2026-04-05 04:51:05.666123 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:51:05.666128 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:51:05.666133 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:51:05.666138 | orchestrator |
2026-04-05 04:51:05.666143 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-05 04:51:05.666148 | orchestrator | Sunday 05 April 2026 04:50:58 +0000 (0:00:01.178) 0:05:35.318 **********
2026-04-05 04:51:05.666152 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:51:05.666157 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:51:05.666162 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:51:05.666166 | orchestrator |
2026-04-05 04:51:05.666171 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-05 04:51:05.666176 | orchestrator | Sunday 05 April 2026 04:51:00 +0000 (0:00:02.082) 0:05:37.400 **********
2026-04-05 04:51:05.666180 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.666185 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666190 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666195 | orchestrator |
2026-04-05 04:51:05.666199 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-05 04:51:05.666204 | orchestrator | Sunday 05 April 2026 04:51:00 +0000 (0:00:00.352) 0:05:37.752 **********
2026-04-05 04:51:05.666209 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.666213 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666218 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666223 | orchestrator |
2026-04-05 04:51:05.666227 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-05 04:51:05.666232 | orchestrator | Sunday 05 April 2026 04:51:01 +0000 (0:00:00.683) 0:05:38.436 **********
2026-04-05 04:51:05.666237 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.666242 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666246 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666251 | orchestrator |
2026-04-05 04:51:05.666255 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-05 04:51:05.666260 | orchestrator | Sunday 05 April 2026 04:51:01 +0000 (0:00:00.363) 0:05:38.799 **********
2026-04-05 04:51:05.666265 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.666274 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666279 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666284 | orchestrator |
2026-04-05 04:51:05.666289 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-05 04:51:05.666293 | orchestrator | Sunday 05 April 2026 04:51:02 +0000 (0:00:00.329) 0:05:39.129 **********
2026-04-05 04:51:05.666298 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:05.666303 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:51:05.666364 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:51:05.666370 | orchestrator |
2026-04-05 04:51:05.666376 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-05 04:51:05.666382 | orchestrator | Sunday 05 April 2026 04:51:02 +0000 (0:00:00.343) 0:05:39.473 **********
2026-04-05 04:51:05.666388 | orchestrator |
included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 04:51:05.666394 | orchestrator |
2026-04-05 04:51:05.666400 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-05 04:51:05.666405 | orchestrator | Sunday 05 April 2026 04:51:04 +0000 (0:00:01.888) 0:05:41.362 **********
2026-04-05 04:51:05.666417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:51:09.452514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:51:09.452685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:51:09.452728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:51:09.452747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:51:09.452795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:51:09.452817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:51:09.452864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:51:09.452887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:51:09.452902 | orchestrator |
2026-04-05 04:51:09.452915 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-05 04:51:09.452935 | orchestrator | Sunday 05 April 2026 04:51:07 +0000 (0:00:03.405) 0:05:44.767 **********
2026-04-05 04:51:09.452948 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 04:51:09.452960 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:51:09.452975 | orchestrator | }
2026-04-05 04:51:09.452999 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 04:51:09.453025 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:51:09.453042 | orchestrator | }
2026-04-05 04:51:09.453059 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 04:51:09.453078 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 04:51:09.453095 | orchestrator | }
2026-04-05 04:51:09.453113 | orchestrator |
2026-04-05 04:51:09.453133 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 04:51:09.453153 | orchestrator | Sunday 05 April 2026 04:51:08 +0000 (0:00:01.002) 0:05:45.769 **********
2026-04-05 04:51:09.453174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-05 04:51:09.453212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:51:09.453233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:51:09.453248 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:51:09.453262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-05 04:51:09.453288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:52:55.724706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:52:55.724822 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.724848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-05 04:52:55.724899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-05 04:52:55.724920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-05 04:52:55.724939 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.724958 | orchestrator |
2026-04-05 04:52:55.724978 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-05 04:52:55.724997 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-05 04:52:55.725022 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-05 04:52:55.725060 | orchestrator | Sunday 05 April 2026 04:51:10 +0000 (0:00:01.857) 0:05:47.626 **********
2026-04-05 04:52:55.725080 | orchestrator | ok:
[testbed-node-0]
2026-04-05 04:52:55.725100 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.725118 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.725136 | orchestrator |
2026-04-05 04:52:55.725155 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-05 04:52:55.725175 | orchestrator | Sunday 05 April 2026 04:51:11 +0000 (0:00:00.752) 0:05:48.379 **********
2026-04-05 04:52:55.725193 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:52:55.725213 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.725266 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.725287 | orchestrator |
2026-04-05 04:52:55.725301 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-05 04:52:55.725314 | orchestrator | Sunday 05 April 2026 04:51:11 +0000 (0:00:00.395) 0:05:48.775 **********
2026-04-05 04:52:55.725327 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725340 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725353 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725365 | orchestrator |
2026-04-05 04:52:55.725376 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-05 04:52:55.725387 | orchestrator | Sunday 05 April 2026 04:51:18 +0000 (0:00:06.664) 0:05:55.439 **********
2026-04-05 04:52:55.725398 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725408 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725419 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725430 | orchestrator |
2026-04-05 04:52:55.725441 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-05 04:52:55.725451 | orchestrator | Sunday 05 April 2026 04:51:24 +0000 (0:00:06.041) 0:06:01.481 **********
2026-04-05 04:52:55.725474 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725485 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725496 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725507 | orchestrator |
2026-04-05 04:52:55.725587 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-05 04:52:55.725602 | orchestrator | Sunday 05 April 2026 04:51:30 +0000 (0:00:06.056) 0:06:07.538 **********
2026-04-05 04:52:55.725613 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725624 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725635 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725646 | orchestrator |
2026-04-05 04:52:55.725657 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-05 04:52:55.725677 | orchestrator | Sunday 05 April 2026 04:51:37 +0000 (0:00:06.868) 0:06:14.406 **********
2026-04-05 04:52:55.725688 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.725699 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.725709 | orchestrator |
2026-04-05 04:52:55.725720 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-05 04:52:55.725731 | orchestrator | Sunday 05 April 2026 04:51:41 +0000 (0:00:04.064) 0:06:18.470 **********
2026-04-05 04:52:55.725742 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725752 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725763 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725774 | orchestrator |
2026-04-05 04:52:55.725785 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-05 04:52:55.725795 | orchestrator | Sunday 05 April 2026 04:51:54 +0000 (0:00:13.061) 0:06:31.532 **********
2026-04-05 04:52:55.725806 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.725817 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.725828 | orchestrator |
2026-04-05 04:52:55.725838 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-05 04:52:55.725849 | orchestrator | Sunday 05 April 2026 04:51:58 +0000 (0:00:03.713) 0:06:35.246 **********
2026-04-05 04:52:55.725860 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:52:55.725871 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:52:55.725881 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:52:55.725892 | orchestrator |
2026-04-05 04:52:55.725903 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-05 04:52:55.725913 | orchestrator | Sunday 05 April 2026 04:52:04 +0000 (0:00:06.270) 0:06:41.516 **********
2026-04-05 04:52:55.725924 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.725935 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.725946 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.725956 | orchestrator |
2026-04-05 04:52:55.725967 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-05 04:52:55.725978 | orchestrator | Sunday 05 April 2026 04:52:10 +0000 (0:00:05.864) 0:06:47.381 **********
2026-04-05 04:52:55.725988 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.725999 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.726010 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.726083 | orchestrator |
2026-04-05 04:52:55.726095 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-05 04:52:55.726105 | orchestrator | Sunday 05 April 2026 04:52:16 +0000 (0:00:05.878) 0:06:53.260 **********
2026-04-05 04:52:55.726116 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.726127 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.726138 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.726149 | orchestrator |
2026-04-05 04:52:55.726159 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-05 04:52:55.726170 | orchestrator | Sunday 05 April 2026 04:52:22 +0000 (0:00:05.805) 0:06:59.065 **********
2026-04-05 04:52:55.726181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.726192 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.726203 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.726255 | orchestrator |
2026-04-05 04:52:55.726267 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-04-05 04:52:55.726278 | orchestrator | Sunday 05 April 2026 04:52:28 +0000 (0:00:06.078) 0:07:05.144 **********
2026-04-05 04:52:55.726289 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:52:55.726300 | orchestrator |
2026-04-05 04:52:55.726310 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-05 04:52:55.726321 | orchestrator | Sunday 05 April 2026 04:52:31 +0000 (0:00:03.320) 0:07:08.465 **********
2026-04-05 04:52:55.726332 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.726343 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.726354 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.726365 | orchestrator |
2026-04-05 04:52:55.726376 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-04-05 04:52:55.726386 | orchestrator | Sunday 05 April 2026 04:52:43 +0000 (0:00:11.452) 0:07:19.917 **********
2026-04-05 04:52:55.726397 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:52:55.726408 | orchestrator |
2026-04-05 04:52:55.726419 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-05 04:52:55.726430 | orchestrator | Sunday 05 April 2026 04:52:47 +0000 (0:00:04.595) 0:07:24.513 **********
2026-04-05 04:52:55.726440 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:52:55.726451 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:52:55.726462 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:52:55.726473 | orchestrator |
2026-04-05 04:52:55.726484 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-05 04:52:55.726494 | orchestrator | Sunday 05 April 2026 04:52:53 +0000 (0:00:05.690) 0:07:30.203 **********
2026-04-05 04:52:55.726505 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:52:55.726516 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.726527 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.726537 | orchestrator |
2026-04-05 04:52:55.726548 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-05 04:52:55.726559 | orchestrator | Sunday 05 April 2026 04:52:54 +0000 (0:00:00.958) 0:07:31.162 **********
2026-04-05 04:52:55.726569 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:52:55.726580 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:52:55.726591 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:52:55.726601 | orchestrator |
2026-04-05 04:52:55.726612 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:52:55.726625 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-05 04:52:55.726646 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-05 04:52:57.203379 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-05 04:52:57.203458 | orchestrator |
2026-04-05 04:52:57.203467 | orchestrator |
2026-04-05 04:52:57.203474 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:52:57.203496 | orchestrator | Sunday 05 April 2026 04:52:56 +0000
(0:00:02.138) 0:07:33.301 **********
2026-04-05 04:52:57.203503 | orchestrator | ===============================================================================
2026-04-05 04:52:57.203509 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.06s
2026-04-05 04:52:57.203516 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 11.45s
2026-04-05 04:52:57.203522 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.87s
2026-04-05 04:52:57.203529 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.82s
2026-04-05 04:52:57.203535 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.66s
2026-04-05 04:52:57.203541 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.36s
2026-04-05 04:52:57.203564 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.27s
2026-04-05 04:52:57.203570 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.08s
2026-04-05 04:52:57.203576 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.06s
2026-04-05 04:52:57.203582 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.04s
2026-04-05 04:52:57.203588 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.91s
2026-04-05 04:52:57.203594 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.88s
2026-04-05 04:52:57.203600 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.86s
2026-04-05 04:52:57.203606 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.81s
2026-04-05 04:52:57.203612 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 5.69s
2026-04-05 04:52:57.203619 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.89s
2026-04-05 04:52:57.203625 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.66s
2026-04-05 04:52:57.203631 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.60s
2026-04-05 04:52:57.203637 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.55s
2026-04-05 04:52:57.203643 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.51s
2026-04-05 04:52:57.382383 | orchestrator | + osism apply -a upgrade opensearch
2026-04-05 04:52:58.643646 | orchestrator | 2026-04-05 04:52:58 | INFO  | Prepare task for execution of opensearch.
2026-04-05 04:52:58.712685 | orchestrator | 2026-04-05 04:52:58 | INFO  | Task 8895e6b1-560d-4bad-a562-2a14bc9260c7 (opensearch) was prepared for execution.
2026-04-05 04:52:58.712775 | orchestrator | 2026-04-05 04:52:58 | INFO  | It takes a moment until task 8895e6b1-560d-4bad-a562-2a14bc9260c7 (opensearch) has been started and output is visible here.
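Editor's note: the TASKS RECAP above (produced by Ansible's profile_tasks timing callback) prints each entry as a fixed-shape `role : task name ---- N.NNs` line, which makes the slowest steps easy to extract when triaging job duration. A minimal sketch; the `parse_recap` helper and sample lines below are illustrative only, not part of osism, kolla-ansible, or Zuul:

```python
import re

# Matches profile_tasks recap lines such as
# "loadbalancer : Start backup proxysql container ---- 13.06s"
RECAP_LINE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task name, seconds) pairs from TASKS RECAP output lines."""
    results = []
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            results.append((m.group("task"), float(m.group("secs"))))
    return results

# Two lines taken verbatim from the recap above.
sample = [
    "loadbalancer : Start backup proxysql container ------------------------- 13.06s",
    "loadbalancer : Start master proxysql container ------------------------- 11.45s",
]
print(parse_recap(sample))
```

Sorting the resulting pairs by the seconds field reproduces the ordering the recap already uses and works across runs where the ranking differs.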
2026-04-05 04:53:10.204045 | orchestrator |
2026-04-05 04:53:10.204164 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 04:53:10.204182 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-05 04:53:10.204196 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-05 04:53:10.204290 | orchestrator |
2026-04-05 04:53:10.204304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 04:53:10.204315 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-05 04:53:10.204326 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-05 04:53:10.204347 | orchestrator | Sunday 05 April 2026 04:53:03 +0000 (0:00:01.086) 0:00:01.086 **********
2026-04-05 04:53:10.204358 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:53:10.204370 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:53:10.204381 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:53:10.204392 | orchestrator |
2026-04-05 04:53:10.204403 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 04:53:10.204413 | orchestrator | Sunday 05 April 2026 04:53:03 +0000 (0:00:00.721) 0:00:01.808 **********
2026-04-05 04:53:10.204424 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-05 04:53:10.204435 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-05 04:53:10.204446 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-05 04:53:10.204456 | orchestrator |
2026-04-05 04:53:10.204467 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-05 04:53:10.204478 | orchestrator |
2026-04-05 04:53:10.204488 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-05 04:53:10.204519 | orchestrator | Sunday 05 April 2026 04:53:04 +0000 (0:00:00.805) 0:00:02.614 ********** 2026-04-05 04:53:10.204531 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:53:10.204541 | orchestrator | 2026-04-05 04:53:10.204552 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-05 04:53:10.204563 | orchestrator | Sunday 05 April 2026 04:53:06 +0000 (0:00:01.405) 0:00:04.020 ********** 2026-04-05 04:53:10.204573 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 04:53:10.204587 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 04:53:10.204613 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-05 04:53:10.204626 | orchestrator | 2026-04-05 04:53:10.204638 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-05 04:53:10.204651 | orchestrator | Sunday 05 April 2026 04:53:08 +0000 (0:00:02.627) 0:00:06.647 ********** 2026-04-05 04:53:10.204667 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:10.204684 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:10.204718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
04:53:10.204735 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:10.204764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:10.204788 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:14.073141 | orchestrator | 2026-04-05 04:53:14.073348 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 04:53:14.073381 | orchestrator | Sunday 05 April 2026 04:53:10 +0000 (0:00:01.610) 0:00:08.258 ********** 2026-04-05 04:53:14.073402 | orchestrator | 
included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:53:14.073422 | orchestrator | 2026-04-05 04:53:14.073442 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-05 04:53:14.073462 | orchestrator | Sunday 05 April 2026 04:53:11 +0000 (0:00:01.078) 0:00:09.336 ********** 2026-04-05 04:53:14.073515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:14.073547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:14.073560 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:14.073599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:14.073614 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:14.073640 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:14.073655 | orchestrator | 2026-04-05 04:53:14.073668 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-05 04:53:14.073682 | orchestrator | Sunday 05 April 2026 04:53:13 +0000 (0:00:02.218) 0:00:11.554 ********** 2026-04-05 04:53:14.073703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:14.073738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:15.432918 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:53:15.433031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:15.433067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:15.433084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:15.433097 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:53:15.433128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:15.433161 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:53:15.433173 | orchestrator | 2026-04-05 
04:53:15.433185 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-05 04:53:15.433198 | orchestrator | Sunday 05 April 2026 04:53:14 +0000 (0:00:00.993) 0:00:12.547 ********** 2026-04-05 04:53:15.433260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:15.433275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:15.433287 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:53:15.433299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:15.433328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:18.146554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:53:18.146652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:53:18.146672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:53:18.146686 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:53:18.146698 | orchestrator | 2026-04-05 04:53:18.146710 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-05 04:53:18.146753 | orchestrator | Sunday 05 April 2026 04:53:15 +0000 (0:00:01.254) 0:00:13.802 ********** 2026-04-05 04:53:18.146765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:18.146795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:18.146813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:18.146826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:18.146846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:18.146868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:25.814635 | orchestrator | 2026-04-05 04:53:25.814782 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-05 04:53:25.814803 | 
orchestrator | Sunday 05 April 2026 04:53:18 +0000 (0:00:02.404) 0:00:16.207 ********** 2026-04-05 04:53:25.814816 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:53:25.814828 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:53:25.814838 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:53:25.814884 | orchestrator | 2026-04-05 04:53:25.814895 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-05 04:53:25.814906 | orchestrator | Sunday 05 April 2026 04:53:20 +0000 (0:00:02.404) 0:00:18.611 ********** 2026-04-05 04:53:25.814917 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:53:25.814928 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:53:25.814939 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:53:25.814951 | orchestrator | 2026-04-05 04:53:25.814961 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-05 04:53:25.814972 | orchestrator | Sunday 05 April 2026 04:53:22 +0000 (0:00:02.140) 0:00:20.752 ********** 2026-04-05 04:53:25.814988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
04:53:25.815027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:25.815040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 04:53:25.815083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:25.815116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:25.815151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-05 04:53:25.815166 | orchestrator | 2026-04-05 04:53:25.815179 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-05 04:53:25.815193 | orchestrator | Sunday 05 April 2026 04:53:25 +0000 (0:00:02.300) 0:00:23.052 ********** 2026-04-05 04:53:25.815206 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 04:53:25.815268 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-05 04:53:25.815289 | orchestrator | } 2026-04-05 04:53:25.815308 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 04:53:25.815325 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:53:25.815338 | orchestrator | } 2026-04-05 04:53:25.815350 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 04:53:25.815363 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:53:25.815376 | orchestrator | } 2026-04-05 04:53:25.815389 | orchestrator | 2026-04-05 04:53:25.815402 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 04:53:25.815415 | orchestrator | Sunday 05 April 2026 04:53:25 +0000 (0:00:00.388) 0:00:23.440 ********** 2026-04-05 04:53:25.815446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:56:36.542837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:56:36.542984 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:56:36.543004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:56:36.543017 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:56:36.543029 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:56:36.543074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 04:56:36.543097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-05 04:56:36.543109 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:56:36.543120 | orchestrator | 2026-04-05 04:56:36.543152 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 04:56:36.543165 | orchestrator | Sunday 05 April 2026 04:53:27 +0000 (0:00:01.667) 0:00:25.107 ********** 2026-04-05 04:56:36.543176 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:56:36.543186 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:56:36.543198 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
04:56:36.543209 | orchestrator | 2026-04-05 04:56:36.543220 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 04:56:36.543231 | orchestrator | Sunday 05 April 2026 04:53:27 +0000 (0:00:00.315) 0:00:25.423 ********** 2026-04-05 04:56:36.543241 | orchestrator | 2026-04-05 04:56:36.543251 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 04:56:36.543262 | orchestrator | Sunday 05 April 2026 04:53:27 +0000 (0:00:00.073) 0:00:25.496 ********** 2026-04-05 04:56:36.543273 | orchestrator | 2026-04-05 04:56:36.543284 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-05 04:56:36.543294 | orchestrator | Sunday 05 April 2026 04:53:27 +0000 (0:00:00.072) 0:00:25.569 ********** 2026-04-05 04:56:36.543305 | orchestrator | 2026-04-05 04:56:36.543316 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-05 04:56:36.543326 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-05 04:56:36.543338 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-05 04:56:36.543360 | orchestrator | Sunday 05 April 2026 04:53:27 +0000 (0:00:00.071) 0:00:25.640 ********** 2026-04-05 04:56:36.543374 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:56:36.543387 | orchestrator | 2026-04-05 04:56:36.543399 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-05 04:56:36.543413 | orchestrator | Sunday 05 April 2026 04:53:30 +0000 (0:00:02.469) 0:00:28.110 ********** 2026-04-05 04:56:36.543426 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:56:36.543438 | orchestrator | 2026-04-05 04:56:36.543451 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-05 04:56:36.543464 | orchestrator | Sunday 05 April 
2026 04:53:36 +0000 (0:00:06.783) 0:00:34.894 ********** 2026-04-05 04:56:36.543477 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:56:36.543489 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:56:36.543502 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:56:36.543544 | orchestrator | 2026-04-05 04:56:36.543564 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-05 04:56:36.543595 | orchestrator | Sunday 05 April 2026 04:54:54 +0000 (0:01:17.829) 0:01:52.723 ********** 2026-04-05 04:56:36.543612 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:56:36.543625 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:56:36.543639 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:56:36.543651 | orchestrator | 2026-04-05 04:56:36.543664 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-05 04:56:36.543677 | orchestrator | Sunday 05 April 2026 04:56:31 +0000 (0:01:36.347) 0:03:29.071 ********** 2026-04-05 04:56:36.543690 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:56:36.543703 | orchestrator | 2026-04-05 04:56:36.543716 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-05 04:56:36.543727 | orchestrator | Sunday 05 April 2026 04:56:32 +0000 (0:00:01.087) 0:03:30.159 ********** 2026-04-05 04:56:36.543738 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:56:36.543749 | orchestrator | 2026-04-05 04:56:36.543760 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-05 04:56:36.543770 | orchestrator | Sunday 05 April 2026 04:56:34 +0000 (0:00:02.225) 0:03:32.384 ********** 2026-04-05 04:56:36.543781 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:56:36.543792 | orchestrator | 2026-04-05 04:56:36.543810 | orchestrator | TASK 
[opensearch : Check if a log retention policy exists] ********************* 2026-04-05 04:56:40.623244 | orchestrator | Sunday 05 April 2026 04:56:36 +0000 (0:00:02.114) 0:03:34.498 ********** 2026-04-05 04:56:40.623333 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:56:40.623347 | orchestrator | 2026-04-05 04:56:40.623357 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-05 04:56:40.623366 | orchestrator | Sunday 05 April 2026 04:56:38 +0000 (0:00:02.274) 0:03:36.773 ********** 2026-04-05 04:56:40.623376 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:56:40.623386 | orchestrator | 2026-04-05 04:56:40.623395 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-05 04:56:40.623404 | orchestrator | Sunday 05 April 2026 04:56:39 +0000 (0:00:00.256) 0:03:37.029 ********** 2026-04-05 04:56:40.623413 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:56:40.623422 | orchestrator | 2026-04-05 04:56:40.623430 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:56:40.623440 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 04:56:40.623450 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 04:56:40.623459 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 04:56:40.623468 | orchestrator | 2026-04-05 04:56:40.623476 | orchestrator | 2026-04-05 04:56:40.623485 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:56:40.623494 | orchestrator | Sunday 05 April 2026 04:56:40 +0000 (0:00:01.148) 0:03:38.178 ********** 2026-04-05 04:56:40.623606 | orchestrator | =============================================================================== 
2026-04-05 04:56:40.623619 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 96.35s 2026-04-05 04:56:40.623628 | orchestrator | opensearch : Restart opensearch container ------------------------------ 77.83s 2026-04-05 04:56:40.623637 | orchestrator | opensearch : Perform a flush -------------------------------------------- 6.78s 2026-04-05 04:56:40.623645 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.62s 2026-04-05 04:56:40.623654 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.47s 2026-04-05 04:56:40.623662 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.41s 2026-04-05 04:56:40.623671 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.40s 2026-04-05 04:56:40.623699 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.30s 2026-04-05 04:56:40.623708 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2026-04-05 04:56:40.623717 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.23s 2026-04-05 04:56:40.623725 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.22s 2026-04-05 04:56:40.623734 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.14s 2026-04-05 04:56:40.623742 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.11s 2026-04-05 04:56:40.623751 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.67s 2026-04-05 04:56:40.623759 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.61s 2026-04-05 04:56:40.623767 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.41s 2026-04-05 
04:56:40.623776 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.25s 2026-04-05 04:56:40.623785 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.15s 2026-04-05 04:56:40.623807 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.09s 2026-04-05 04:56:40.623828 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.08s 2026-04-05 04:56:40.813940 | orchestrator | + osism apply -a upgrade memcached 2026-04-05 04:56:42.097014 | orchestrator | 2026-04-05 04:56:42 | INFO  | Prepare task for execution of memcached. 2026-04-05 04:56:42.161159 | orchestrator | 2026-04-05 04:56:42 | INFO  | Task 8086c795-ebac-405b-aef8-64d1e0e0ac73 (memcached) was prepared for execution. 2026-04-05 04:56:42.161254 | orchestrator | 2026-04-05 04:56:42 | INFO  | It takes a moment until task 8086c795-ebac-405b-aef8-64d1e0e0ac73 (memcached) has been started and output is visible here. 
2026-04-05 04:57:16.535113 | orchestrator | 2026-04-05 04:57:16.535236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:57:16.535256 | orchestrator | 2026-04-05 04:57:16.535271 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:57:16.535285 | orchestrator | Sunday 05 April 2026 04:56:47 +0000 (0:00:02.015) 0:00:02.015 ********** 2026-04-05 04:57:16.535298 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:57:16.535311 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:57:16.535323 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:57:16.535336 | orchestrator | 2026-04-05 04:57:16.535365 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:57:16.535379 | orchestrator | Sunday 05 April 2026 04:56:49 +0000 (0:00:01.719) 0:00:03.735 ********** 2026-04-05 04:57:16.535393 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-05 04:57:16.535405 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-05 04:57:16.535418 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-05 04:57:16.535427 | orchestrator | 2026-04-05 04:57:16.535437 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-05 04:57:16.535450 | orchestrator | 2026-04-05 04:57:16.535462 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-05 04:57:16.535474 | orchestrator | Sunday 05 April 2026 04:56:51 +0000 (0:00:01.790) 0:00:05.525 ********** 2026-04-05 04:57:16.535487 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:57:16.535499 | orchestrator | 2026-04-05 04:57:16.535559 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-04-05 04:57:16.535573 | orchestrator | Sunday 05 April 2026 04:56:55 +0000 (0:00:04.141) 0:00:09.666 ********** 2026-04-05 04:57:16.535588 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-05 04:57:16.535601 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-05 04:57:16.535633 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-05 04:57:16.535640 | orchestrator | 2026-04-05 04:57:16.535648 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-05 04:57:16.535655 | orchestrator | Sunday 05 April 2026 04:56:57 +0000 (0:00:02.132) 0:00:11.799 ********** 2026-04-05 04:57:16.535662 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-05 04:57:16.535670 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-05 04:57:16.535677 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-05 04:57:16.535684 | orchestrator | 2026-04-05 04:57:16.535691 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-05 04:57:16.535698 | orchestrator | Sunday 05 April 2026 04:57:00 +0000 (0:00:02.852) 0:00:14.652 ********** 2026-04-05 04:57:16.535709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}}) 2026-04-05 04:57:16.535721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 04:57:16.535756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-05 04:57:16.535770 | orchestrator | 2026-04-05 04:57:16.535783 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-05 04:57:16.535795 | orchestrator | Sunday 05 April 2026 04:57:02 +0000 (0:00:02.385) 0:00:17.038 ********** 2026-04-05 04:57:16.535808 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 
04:57:16.535821 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:57:16.535872 | orchestrator | } 2026-04-05 04:57:16.535894 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 04:57:16.535909 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:57:16.535917 | orchestrator | } 2026-04-05 04:57:16.535924 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 04:57:16.535931 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:57:16.535938 | orchestrator | } 2026-04-05 04:57:16.535946 | orchestrator | 2026-04-05 04:57:16.535953 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 04:57:16.535967 | orchestrator | Sunday 05 April 2026 04:57:03 +0000 (0:00:01.392) 0:00:18.430 ********** 2026-04-05 04:57:16.535975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:57:16.535983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:57:16.535991 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:57:16.535998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:57:16.536006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-05 04:57:16.536013 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:57:16.536020 | orchestrator | 2026-04-05 04:57:16.536027 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-05 04:57:16.536034 | orchestrator | Sunday 05 April 2026 04:57:05 +0000 (0:00:01.950) 0:00:20.381 ********** 2026-04-05 04:57:16.536042 | orchestrator | changed: [testbed-node-0] 2026-04-05 04:57:16.536049 | orchestrator | changed: [testbed-node-2] 2026-04-05 04:57:16.536056 | orchestrator | changed: [testbed-node-1] 2026-04-05 04:57:16.536063 | orchestrator | 2026-04-05 04:57:16.536070 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 04:57:16.536078 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 04:57:16.536087 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 04:57:16.536094 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 04:57:16.536101 | orchestrator | 2026-04-05 04:57:16.536109 | orchestrator | 2026-04-05 04:57:16.536116 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 04:57:16.536134 | orchestrator | Sunday 05 April 2026 04:57:16 +0000 (0:00:10.607) 0:00:30.988 ********** 2026-04-05 04:57:16.880430 | orchestrator | =============================================================================== 2026-04-05 04:57:16.880570 | orchestrator | memcached : Restart memcached container -------------------------------- 10.61s 2026-04-05 04:57:16.880592 | orchestrator | memcached : include_tasks ----------------------------------------------- 4.14s 2026-04-05 04:57:16.880609 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.85s 2026-04-05 04:57:16.880647 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.39s 2026-04-05 04:57:16.880665 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.13s 2026-04-05 04:57:16.880681 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.95s 2026-04-05 04:57:16.880697 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s 2026-04-05 04:57:16.880713 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.72s 2026-04-05 04:57:16.880730 | orchestrator | 
service-check-containers : memcached | Notify handlers to restart containers --- 1.39s 2026-04-05 04:57:17.075828 | orchestrator | + osism apply -a upgrade redis 2026-04-05 04:57:18.340605 | orchestrator | 2026-04-05 04:57:18 | INFO  | Prepare task for execution of redis. 2026-04-05 04:57:18.406091 | orchestrator | 2026-04-05 04:57:18 | INFO  | Task 636fc1f1-f426-4e80-a878-3dfa197bbe0b (redis) was prepared for execution. 2026-04-05 04:57:18.406178 | orchestrator | 2026-04-05 04:57:18 | INFO  | It takes a moment until task 636fc1f1-f426-4e80-a878-3dfa197bbe0b (redis) has been started and output is visible here. 2026-04-05 04:57:35.245279 | orchestrator | 2026-04-05 04:57:35.245428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 04:57:35.245475 | orchestrator | 2026-04-05 04:57:35.245490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 04:57:35.245502 | orchestrator | Sunday 05 April 2026 04:57:23 +0000 (0:00:01.501) 0:00:01.501 ********** 2026-04-05 04:57:35.245543 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:57:35.245556 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:57:35.245566 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:57:35.245577 | orchestrator | 2026-04-05 04:57:35.245588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 04:57:35.245599 | orchestrator | Sunday 05 April 2026 04:57:25 +0000 (0:00:01.784) 0:00:03.285 ********** 2026-04-05 04:57:35.245610 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-05 04:57:35.245622 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-05 04:57:35.245633 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-05 04:57:35.245644 | orchestrator | 2026-04-05 04:57:35.245655 | orchestrator | PLAY [Apply role redis] 
******************************************************** 2026-04-05 04:57:35.245665 | orchestrator | 2026-04-05 04:57:35.245676 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-05 04:57:35.245687 | orchestrator | Sunday 05 April 2026 04:57:26 +0000 (0:00:01.942) 0:00:05.228 ********** 2026-04-05 04:57:35.245698 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:57:35.245710 | orchestrator | 2026-04-05 04:57:35.245720 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-05 04:57:35.245731 | orchestrator | Sunday 05 April 2026 04:57:30 +0000 (0:00:03.857) 0:00:09.086 ********** 2026-04-05 04:57:35.245745 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.245791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 
04:57:35.245807 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.245837 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.245873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.245888 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.245920 | orchestrator | 2026-04-05 04:57:35.245946 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-05 04:57:35.245959 | orchestrator | Sunday 05 April 2026 04:57:33 +0000 (0:00:02.409) 0:00:11.495 ********** 2026-04-05 04:57:35.245985 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.246007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.246082 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.246103 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:35.246126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182624 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182739 | orchestrator | 2026-04-05 04:57:43.182758 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-05 04:57:43.182772 | orchestrator | Sunday 05 April 2026 04:57:37 +0000 (0:00:03.933) 0:00:15.428 ********** 2026-04-05 04:57:43.182785 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182824 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182837 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182874 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182904 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182916 | orchestrator | 2026-04-05 04:57:43.182928 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-04-05 04:57:43.182939 | orchestrator | Sunday 05 April 2026 04:57:41 +0000 (0:00:04.102) 0:00:19.530 ********** 2026-04-05 04:57:43.182958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.182982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.183000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.183014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:57:43.183033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-05 04:58:11.900717 | orchestrator | 2026-04-05 04:58:11.900831 | 
orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-05 04:58:11.900848 | orchestrator | Sunday 05 April 2026 04:57:44 +0000 (0:00:03.037) 0:00:22.568 ********** 2026-04-05 04:58:11.900862 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 04:58:11.900875 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:58:11.900886 | orchestrator | } 2026-04-05 04:58:11.900898 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 04:58:11.900909 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:58:11.900919 | orchestrator | } 2026-04-05 04:58:11.900931 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 04:58:11.900941 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:58:11.900952 | orchestrator | } 2026-04-05 04:58:11.900963 | orchestrator | 2026-04-05 04:58:11.900974 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 04:58:11.900985 | orchestrator | Sunday 05 April 2026 04:57:45 +0000 (0:00:01.505) 0:00:24.073 ********** 2026-04-05 04:58:11.900998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901025 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:58:11.901038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901077 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:58:11.901088 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-05 04:58:11.901156 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:58:11.901167 | orchestrator | 2026-04-05 04:58:11.901178 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 04:58:11.901188 | orchestrator | Sunday 05 April 2026 04:57:47 +0000 (0:00:02.035) 0:00:26.109 ********** 2026-04-05 04:58:11.901199 | orchestrator | 2026-04-05 04:58:11.901210 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-05 04:58:11.901220 | orchestrator | Sunday 05 April 2026 04:57:48 +0000 (0:00:00.429) 0:00:26.538 ********** 2026-04-05 04:58:11.901231 | orchestrator 
|
2026-04-05 04:58:11.901242 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-05 04:58:11.901253 | orchestrator | Sunday 05 April 2026 04:57:48 +0000 (0:00:00.413) 0:00:26.952 **********
2026-04-05 04:58:11.901267 | orchestrator |
2026-04-05 04:58:11.901279 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-05 04:58:11.901291 | orchestrator | Sunday 05 April 2026 04:57:49 +0000 (0:00:00.777) 0:00:27.730 **********
2026-04-05 04:58:11.901304 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:58:11.901316 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:58:11.901328 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:58:11.901340 | orchestrator |
2026-04-05 04:58:11.901353 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-05 04:58:11.901365 | orchestrator | Sunday 05 April 2026 04:58:00 +0000 (0:00:10.747) 0:00:38.477 **********
2026-04-05 04:58:11.901377 | orchestrator | changed: [testbed-node-0]
2026-04-05 04:58:11.901390 | orchestrator | changed: [testbed-node-1]
2026-04-05 04:58:11.901402 | orchestrator | changed: [testbed-node-2]
2026-04-05 04:58:11.901414 | orchestrator |
2026-04-05 04:58:11.901427 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 04:58:11.901440 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 04:58:11.901454 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 04:58:11.901467 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 04:58:11.901479 | orchestrator |
2026-04-05 04:58:11.901492 | orchestrator |
2026-04-05 04:58:11.901505 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 04:58:11.901554 | orchestrator | Sunday 05 April 2026 04:58:11 +0000 (0:00:11.382) 0:00:49.860 **********
2026-04-05 04:58:11.901567 | orchestrator | ===============================================================================
2026-04-05 04:58:11.901580 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.38s
2026-04-05 04:58:11.901601 | orchestrator | redis : Restart redis container ---------------------------------------- 10.75s
2026-04-05 04:58:11.901620 | orchestrator | redis : Copying over redis config files --------------------------------- 4.10s
2026-04-05 04:58:11.901632 | orchestrator | redis : Copying over default config.json files -------------------------- 3.93s
2026-04-05 04:58:11.901642 | orchestrator | redis : include_tasks --------------------------------------------------- 3.86s
2026-04-05 04:58:11.901653 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.04s
2026-04-05 04:58:11.901663 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.41s
2026-04-05 04:58:11.901674 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.03s
2026-04-05 04:58:11.901684 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.94s
2026-04-05 04:58:11.901695 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.78s
2026-04-05 04:58:11.901705 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.62s
2026-04-05 04:58:11.901716 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.51s
2026-04-05 04:58:12.092456 | orchestrator | + osism apply -a upgrade mariadb
2026-04-05 04:58:13.386801 | orchestrator | 2026-04-05 04:58:13 | INFO  | Prepare task for execution of mariadb.
2026-04-05 04:58:13.454872 | orchestrator | 2026-04-05 04:58:13 | INFO  | Task cdee86b7-f0b4-4bad-be1c-7b8d85e90d48 (mariadb) was prepared for execution.
2026-04-05 04:58:13.454965 | orchestrator | 2026-04-05 04:58:13 | INFO  | It takes a moment until task cdee86b7-f0b4-4bad-be1c-7b8d85e90d48 (mariadb) has been started and output is visible here.
2026-04-05 04:58:40.518276 | orchestrator |
2026-04-05 04:58:40.518359 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 04:58:40.518367 | orchestrator |
2026-04-05 04:58:40.518374 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 04:58:40.518380 | orchestrator | Sunday 05 April 2026 04:58:18 +0000 (0:00:01.902) 0:00:01.902 **********
2026-04-05 04:58:40.518385 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:58:40.518391 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:58:40.518396 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:58:40.518401 | orchestrator |
2026-04-05 04:58:40.518407 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 04:58:40.518412 | orchestrator | Sunday 05 April 2026 04:58:20 +0000 (0:00:01.843) 0:00:03.748 **********
2026-04-05 04:58:40.518417 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-05 04:58:40.518422 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-05 04:58:40.518427 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-05 04:58:40.518432 | orchestrator |
2026-04-05 04:58:40.518437 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-05 04:58:40.518442 | orchestrator |
2026-04-05 04:58:40.518447 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-05 04:58:40.518453 | orchestrator | Sunday 05 April 2026 04:58:22 +0000
(0:00:02.276) 0:00:06.024 ********** 2026-04-05 04:58:40.518458 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 04:58:40.518463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 04:58:40.518468 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 04:58:40.518473 | orchestrator | 2026-04-05 04:58:40.518478 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 04:58:40.518483 | orchestrator | Sunday 05 April 2026 04:58:24 +0000 (0:00:01.792) 0:00:07.817 ********** 2026-04-05 04:58:40.518489 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:58:40.518494 | orchestrator | 2026-04-05 04:58:40.518500 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] *********************************** 2026-04-05 04:58:40.518505 | orchestrator | Sunday 05 April 2026 04:58:26 +0000 (0:00:02.377) 0:00:10.194 ********** 2026-04-05 04:58:40.518528 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:58:40.518533 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:58:40.518539 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:58:40.518575 | orchestrator | 2026-04-05 04:58:40.518582 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-05 04:58:40.518587 | orchestrator | Sunday 05 April 2026 04:58:29 +0000 (0:00:02.794) 0:00:12.989 ********** 2026-04-05 04:58:40.518602 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:40.518623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:40.518638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:40.518644 | orchestrator | 2026-04-05 04:58:40.518649 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-05 04:58:40.518655 | orchestrator | Sunday 05 April 2026 04:58:33 +0000 (0:00:03.822) 0:00:16.811 ********** 2026-04-05 04:58:40.518660 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:58:40.518666 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:58:40.518671 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:58:40.518676 | orchestrator | 2026-04-05 04:58:40.518681 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-05 04:58:40.518688 | orchestrator | Sunday 05 April 2026 04:58:35 +0000 (0:00:01.580) 0:00:18.392 ********** 2026-04-05 04:58:40.518696 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 04:58:40.518704 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:58:40.518712 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:58:40.518718 | orchestrator | 2026-04-05 04:58:40.518724 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-05 04:58:40.518734 | orchestrator | Sunday 05 April 2026 04:58:37 +0000 (0:00:02.245) 0:00:20.638 ********** 2026-04-05 04:58:40.518744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:53.049695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:53.049851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:58:53.049897 | orchestrator | 2026-04-05 04:58:53.049913 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-05 04:58:53.049927 | orchestrator | Sunday 05 April 2026 04:58:41 +0000 (0:00:04.350) 0:00:24.988 ********** 2026-04-05 04:58:53.049938 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:58:53.049951 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:58:53.049961 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:58:53.049973 | orchestrator | 2026-04-05 04:58:53.049985 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-05 04:58:53.050085 | orchestrator | Sunday 05 April 2026 04:58:43 +0000 (0:00:02.073) 0:00:27.062 ********** 2026-04-05 04:58:53.050101 | orchestrator | ok: [testbed-node-0] 2026-04-05 04:58:53.050114 | orchestrator | ok: [testbed-node-1] 2026-04-05 04:58:53.050139 | orchestrator | ok: [testbed-node-2] 2026-04-05 04:58:53.050152 | orchestrator | 2026-04-05 04:58:53.050165 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-05 04:58:53.050178 | orchestrator | Sunday 05 April 2026 04:58:48 +0000 (0:00:04.958) 0:00:32.020 ********** 2026-04-05 04:58:53.050193 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 04:58:53.050205 | orchestrator | 2026-04-05 04:58:53.050219 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-05 04:58:53.050232 | orchestrator | Sunday 05 April 2026 04:58:50 +0000 (0:00:01.762) 0:00:33.783 ********** 2026-04-05 04:58:53.050255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:58:53.050271 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:58:53.050295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:00.298770 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:00.298925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:00.298943 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:00.298952 | orchestrator | 2026-04-05 04:59:00.298964 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-05 04:59:00.298976 | orchestrator | Sunday 05 April 2026 
04:58:54 +0000 (0:00:03.957) 0:00:37.740 ********** 2026-04-05 04:59:00.298988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:00.299022 | orchestrator | skipping: [testbed-node-0] 
2026-04-05 04:59:00.299060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:00.299072 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:00.299082 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:00.299098 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:00.299108 | orchestrator | 2026-04-05 04:59:00.299118 | orchestrator | TASK 
[service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-05 04:59:00.299128 | orchestrator | Sunday 05 April 2026 04:58:58 +0000 (0:00:03.583) 0:00:41.324 ********** 2026-04-05 04:59:00.299152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:05.180939 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:05.181025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', '']}}}})  2026-04-05 04:59:05.181050 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:05.181068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 
04:59:05.181074 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:05.181079 | orchestrator | 2026-04-05 04:59:05.181085 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-05 04:59:05.181091 | orchestrator | Sunday 05 April 2026 04:59:02 +0000 (0:00:03.920) 0:00:45.244 ********** 2026-04-05 04:59:05.181108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:59:05.181122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 04:59:05.181133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-05 
04:59:20.403967 | orchestrator | 2026-04-05 04:59:20.404085 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-05 04:59:20.404103 | orchestrator | Sunday 05 April 2026 04:59:06 +0000 (0:00:04.234) 0:00:49.479 ********** 2026-04-05 04:59:20.404115 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 04:59:20.404128 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:59:20.404138 | orchestrator | } 2026-04-05 04:59:20.404149 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 04:59:20.404160 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:59:20.404171 | orchestrator | } 2026-04-05 04:59:20.404183 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 04:59:20.404194 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 04:59:20.404204 | orchestrator | } 2026-04-05 04:59:20.404215 | orchestrator | 2026-04-05 04:59:20.404226 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 04:59:20.404237 | orchestrator | Sunday 05 April 2026 04:59:07 +0000 (0:00:01.431) 0:00:50.910 ********** 2026-04-05 04:59:20.404269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:20.404309 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:20.404355 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:20.404379 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404408 | orchestrator | 2026-04-05 04:59:20.404430 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-05 04:59:20.404441 | orchestrator | Sunday 05 April 2026 04:59:11 +0000 (0:00:03.786) 0:00:54.696 ********** 2026-04-05 04:59:20.404452 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404463 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404473 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404484 | orchestrator | 2026-04-05 04:59:20.404494 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-05 04:59:20.404505 | orchestrator | Sunday 05 April 2026 04:59:13 +0000 (0:00:01.579) 0:00:56.276 ********** 2026-04-05 04:59:20.404515 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404526 | orchestrator | 2026-04-05 04:59:20.404536 | orchestrator | TASK [mariadb : Stop MariaDB containers] 
*************************************** 2026-04-05 04:59:20.404573 | orchestrator | Sunday 05 April 2026 04:59:14 +0000 (0:00:01.101) 0:00:57.378 ********** 2026-04-05 04:59:20.404584 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404595 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404605 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404616 | orchestrator | 2026-04-05 04:59:20.404626 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-05 04:59:20.404637 | orchestrator | Sunday 05 April 2026 04:59:15 +0000 (0:00:01.545) 0:00:58.923 ********** 2026-04-05 04:59:20.404647 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404658 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404669 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404679 | orchestrator | 2026-04-05 04:59:20.404697 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-05 04:59:20.404714 | orchestrator | Sunday 05 April 2026 04:59:17 +0000 (0:00:01.365) 0:01:00.288 ********** 2026-04-05 04:59:20.404731 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404749 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404768 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404785 | orchestrator | 2026-04-05 04:59:20.404800 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-05 04:59:20.404811 | orchestrator | Sunday 05 April 2026 04:59:18 +0000 (0:00:01.573) 0:01:01.862 ********** 2026-04-05 04:59:20.404822 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404832 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404842 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404853 | orchestrator | 2026-04-05 04:59:20.404863 | orchestrator | TASK [mariadb : Removing MariaDB log file 
from /tmp] *************************** 2026-04-05 04:59:20.404874 | orchestrator | Sunday 05 April 2026 04:59:19 +0000 (0:00:01.341) 0:01:03.203 ********** 2026-04-05 04:59:20.404884 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:20.404895 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:20.404905 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:20.404916 | orchestrator | 2026-04-05 04:59:20.404935 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-05 04:59:38.052754 | orchestrator | Sunday 05 April 2026 04:59:21 +0000 (0:00:01.444) 0:01:04.648 ********** 2026-04-05 04:59:38.052864 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.052881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.052891 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.052900 | orchestrator | 2026-04-05 04:59:38.052911 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-05 04:59:38.052921 | orchestrator | Sunday 05 April 2026 04:59:22 +0000 (0:00:01.360) 0:01:06.008 ********** 2026-04-05 04:59:38.052930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 04:59:38.052938 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 04:59:38.052947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 04:59:38.052955 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.052963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 04:59:38.052992 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 04:59:38.053002 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 04:59:38.053010 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053018 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 
04:59:38.053026 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 04:59:38.053035 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 04:59:38.053043 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053052 | orchestrator | 2026-04-05 04:59:38.053060 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-04-05 04:59:38.053069 | orchestrator | Sunday 05 April 2026 04:59:24 +0000 (0:00:01.603) 0:01:07.612 ********** 2026-04-05 04:59:38.053077 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053086 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053094 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053102 | orchestrator | 2026-04-05 04:59:38.053110 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-05 04:59:38.053119 | orchestrator | Sunday 05 April 2026 04:59:25 +0000 (0:00:01.487) 0:01:09.100 ********** 2026-04-05 04:59:38.053128 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053135 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053144 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053153 | orchestrator | 2026-04-05 04:59:38.053162 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-05 04:59:38.053170 | orchestrator | Sunday 05 April 2026 04:59:27 +0000 (0:00:01.378) 0:01:10.478 ********** 2026-04-05 04:59:38.053178 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053187 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053196 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053204 | orchestrator | 2026-04-05 04:59:38.053212 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-05 04:59:38.053221 | orchestrator | Sunday 05 April 2026 04:59:28 
+0000 (0:00:01.430) 0:01:11.909 ********** 2026-04-05 04:59:38.053229 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053239 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053247 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053255 | orchestrator | 2026-04-05 04:59:38.053278 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-04-05 04:59:38.053288 | orchestrator | Sunday 05 April 2026 04:59:30 +0000 (0:00:01.404) 0:01:13.314 ********** 2026-04-05 04:59:38.053296 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053305 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053313 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053321 | orchestrator | 2026-04-05 04:59:38.053330 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-05 04:59:38.053340 | orchestrator | Sunday 05 April 2026 04:59:31 +0000 (0:00:01.334) 0:01:14.648 ********** 2026-04-05 04:59:38.053349 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053358 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053366 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053374 | orchestrator | 2026-04-05 04:59:38.053383 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-05 04:59:38.053391 | orchestrator | Sunday 05 April 2026 04:59:32 +0000 (0:00:01.375) 0:01:16.024 ********** 2026-04-05 04:59:38.053400 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053409 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053418 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053425 | orchestrator | 2026-04-05 04:59:38.053434 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-05 04:59:38.053442 | orchestrator | Sunday 05 April 2026 04:59:34 
+0000 (0:00:01.576) 0:01:17.601 ********** 2026-04-05 04:59:38.053452 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053467 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:38.053476 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053484 | orchestrator | 2026-04-05 04:59:38.053494 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-05 04:59:38.053502 | orchestrator | Sunday 05 April 2026 04:59:35 +0000 (0:00:01.376) 0:01:18.977 ********** 2026-04-05 04:59:38.053533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:38.053545 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:38.053581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:38.053598 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:38.053615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:55.835353 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:55.835464 | orchestrator | 2026-04-05 04:59:55.835482 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-05 04:59:55.835496 | orchestrator | Sunday 05 April 2026 04:59:39 +0000 (0:00:03.444) 0:01:22.422 ********** 2026-04-05 04:59:55.835507 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:55.835518 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:55.835529 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:55.835540 | orchestrator | 2026-04-05 04:59:55.835552 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-05 04:59:55.835610 | orchestrator | Sunday 05 April 2026 04:59:40 +0000 (0:00:01.428) 0:01:23.850 ********** 2026-04-05 04:59:55.835643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:55.835681 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:55.835714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:55.835727 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:55.835739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-05 04:59:55.835759 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:55.835770 | orchestrator | 2026-04-05 04:59:55.835781 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-05 04:59:55.835792 | orchestrator | Sunday 05 April 2026 04:59:44 +0000 (0:00:03.522) 0:01:27.373 ********** 2026-04-05 04:59:55.835803 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:55.835814 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:55.835825 | orchestrator | skipping: [testbed-node-2] 2026-04-05 04:59:55.835835 | orchestrator | 2026-04-05 04:59:55.835846 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-05 04:59:55.835857 | orchestrator | Sunday 05 April 2026 04:59:45 +0000 (0:00:01.789) 0:01:29.162 ********** 2026-04-05 04:59:55.835867 | orchestrator | skipping: [testbed-node-0] 2026-04-05 04:59:55.835878 | orchestrator | skipping: [testbed-node-1] 2026-04-05 04:59:55.835889 | orchestrator | 
skipping: [testbed-node-2]
2026-04-05 04:59:55.835903 | orchestrator |
2026-04-05 04:59:55.835916 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-05 04:59:55.835929 | orchestrator | Sunday 05 April 2026 04:59:47 +0000 (0:00:01.332) 0:01:30.495 **********
2026-04-05 04:59:55.835941 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:59:55.835954 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:59:55.835966 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:59:55.835979 | orchestrator |
2026-04-05 04:59:55.835991 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-05 04:59:55.836003 | orchestrator | Sunday 05 April 2026 04:59:48 +0000 (0:00:01.372) 0:01:31.867 **********
2026-04-05 04:59:55.836016 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:59:55.836028 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:59:55.836040 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:59:55.836052 | orchestrator |
2026-04-05 04:59:55.836064 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-05 04:59:55.836077 | orchestrator | Sunday 05 April 2026 04:59:50 +0000 (0:00:01.773) 0:01:33.641 **********
2026-04-05 04:59:55.836090 | orchestrator | skipping: [testbed-node-0]
2026-04-05 04:59:55.836103 | orchestrator | skipping: [testbed-node-1]
2026-04-05 04:59:55.836116 | orchestrator | skipping: [testbed-node-2]
2026-04-05 04:59:55.836128 | orchestrator |
2026-04-05 04:59:55.836140 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-05 04:59:55.836152 | orchestrator | Sunday 05 April 2026 04:59:52 +0000 (0:00:02.086) 0:01:35.331 **********
2026-04-05 04:59:55.836164 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:59:55.836177 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:59:55.836189 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:59:55.836201 | orchestrator |
2026-04-05 04:59:55.836214 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-05 04:59:55.836227 | orchestrator | Sunday 05 April 2026 04:59:54 +0000 (0:00:02.086) 0:01:37.417 **********
2026-04-05 04:59:55.836241 | orchestrator | ok: [testbed-node-0]
2026-04-05 04:59:55.836252 | orchestrator | ok: [testbed-node-1]
2026-04-05 04:59:55.836262 | orchestrator | ok: [testbed-node-2]
2026-04-05 04:59:55.836273 | orchestrator |
2026-04-05 04:59:55.836284 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-05 04:59:55.836295 | orchestrator | Sunday 05 April 2026 04:59:55 +0000 (0:00:01.489) 0:01:38.907 **********
2026-04-05 04:59:55.836312 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.119774 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.119877 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.119889 | orchestrator |
2026-04-05 05:02:35.119899 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-05 05:02:35.119933 | orchestrator | Sunday 05 April 2026 04:59:57 +0000 (0:00:01.433) 0:01:40.340 **********
2026-04-05 05:02:35.119941 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.119949 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.119957 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.119964 | orchestrator |
2026-04-05 05:02:35.119972 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-05 05:02:35.119979 | orchestrator | Sunday 05 April 2026 04:59:58 +0000 (0:00:01.762) 0:01:42.103 **********
2026-04-05 05:02:35.119986 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.119993 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120002 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120009 | orchestrator |
2026-04-05 05:02:35.120016 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-05 05:02:35.120023 | orchestrator | Sunday 05 April 2026 05:00:00 +0000 (0:00:01.770) 0:01:43.874 **********
2026-04-05 05:02:35.120031 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.120039 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120046 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120053 | orchestrator |
2026-04-05 05:02:35.120061 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-05 05:02:35.120068 | orchestrator | Sunday 05 April 2026 05:00:02 +0000 (0:00:01.397) 0:01:45.272 **********
2026-04-05 05:02:35.120075 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120082 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120090 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120096 | orchestrator |
2026-04-05 05:02:35.120150 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-05 05:02:35.120159 | orchestrator | Sunday 05 April 2026 05:00:05 +0000 (0:00:03.606) 0:01:48.879 **********
2026-04-05 05:02:35.120168 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120176 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120185 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120193 | orchestrator |
2026-04-05 05:02:35.120200 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-05 05:02:35.120207 | orchestrator | Sunday 05 April 2026 05:00:07 +0000 (0:00:01.495) 0:01:50.374 **********
2026-04-05 05:02:35.120215 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120222 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120229 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120237 | orchestrator |
2026-04-05 05:02:35.120245 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-05 05:02:35.120254 | orchestrator | Sunday 05 April 2026 05:00:08 +0000 (0:00:01.591) 0:01:51.965 **********
2026-04-05 05:02:35.120260 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.120268 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120276 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120283 | orchestrator |
2026-04-05 05:02:35.120290 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 05:02:35.120298 | orchestrator | Sunday 05 April 2026 05:00:10 +0000 (0:00:01.725) 0:01:53.691 **********
2026-04-05 05:02:35.120305 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.120312 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120319 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120326 | orchestrator |
2026-04-05 05:02:35.120333 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 05:02:35.120340 | orchestrator | Sunday 05 April 2026 05:00:11 +0000 (0:00:01.393) 0:01:55.084 **********
2026-04-05 05:02:35.120348 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.120355 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120362 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120369 | orchestrator |
2026-04-05 05:02:35.120376 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-05 05:02:35.120383 | orchestrator | Sunday 05 April 2026 05:00:13 +0000 (0:00:01.758) 0:01:56.843 **********
2026-04-05 05:02:35.120398 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:02:35.120405 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:02:35.120413 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:02:35.120420 | orchestrator |
2026-04-05 05:02:35.120427 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-05 05:02:35.120434 | orchestrator | Sunday 05 April 2026 05:00:15 +0000 (0:00:01.413) 0:01:58.256 **********
2026-04-05 05:02:35.120441 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.120448 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120455 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120462 | orchestrator |
2026-04-05 05:02:35.120469 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 05:02:35.120475 | orchestrator |
2026-04-05 05:02:35.120482 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 05:02:35.120489 | orchestrator | Sunday 05 April 2026 05:00:16 +0000 (0:00:01.801) 0:02:00.058 **********
2026-04-05 05:02:35.120496 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:02:35.120503 | orchestrator |
2026-04-05 05:02:35.120510 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 05:02:35.120517 | orchestrator | Sunday 05 April 2026 05:00:43 +0000 (0:00:05.579) 0:02:26.445 **********
2026-04-05 05:02:35.120524 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120531 | orchestrator |
2026-04-05 05:02:35.120538 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 05:02:35.120545 | orchestrator | Sunday 05 April 2026 05:00:48 +0000 (0:00:05.579) 0:02:32.024 **********
2026-04-05 05:02:35.120552 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120559 | orchestrator |
2026-04-05 05:02:35.120567 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 05:02:35.120574 | orchestrator |
2026-04-05 05:02:35.120580 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 05:02:35.120587 | orchestrator | Sunday 05 April 2026 05:00:51 +0000 (0:00:02.869) 0:02:34.894 **********
2026-04-05 05:02:35.120594 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:02:35.120626 | orchestrator |
2026-04-05 05:02:35.120633 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 05:02:35.120657 | orchestrator | Sunday 05 April 2026 05:01:18 +0000 (0:00:26.405) 0:03:01.300 **********
2026-04-05 05:02:35.120665 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left).
2026-04-05 05:02:35.120673 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120680 | orchestrator |
2026-04-05 05:02:35.120687 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 05:02:35.120694 | orchestrator | Sunday 05 April 2026 05:01:26 +0000 (0:00:08.028) 0:03:09.329 **********
2026-04-05 05:02:35.120701 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.120708 | orchestrator |
2026-04-05 05:02:35.120714 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 05:02:35.120721 | orchestrator |
2026-04-05 05:02:35.120728 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-05 05:02:35.120735 | orchestrator | Sunday 05 April 2026 05:01:29 +0000 (0:00:03.005) 0:03:12.334 **********
2026-04-05 05:02:35.120742 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:02:35.120749 | orchestrator |
2026-04-05 05:02:35.120756 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-05 05:02:35.120763 | orchestrator | Sunday 05 April 2026 05:01:54 +0000 (0:00:24.972) 0:03:37.307 **********
2026-04-05 05:02:35.120770 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-04-05 05:02:35.120777 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120784 | orchestrator |
2026-04-05 05:02:35.120792 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-05 05:02:35.120799 | orchestrator | Sunday 05 April 2026 05:02:02 +0000 (0:00:08.082) 0:03:45.389 **********
2026-04-05 05:02:35.120812 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.120819 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-05 05:02:35.120826 | orchestrator |
2026-04-05 05:02:35.120832 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-05 05:02:35.120844 | orchestrator | skipping: no hosts matched
2026-04-05 05:02:35.120851 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 05:02:35.120858 | orchestrator | mariadb_bootstrap_restart
2026-04-05 05:02:35.120865 | orchestrator |
2026-04-05 05:02:35.120872 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-05 05:02:35.120878 | orchestrator | skipping: no hosts matched
2026-04-05 05:02:35.120886 | orchestrator |
2026-04-05 05:02:35.120893 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-05 05:02:35.120900 | orchestrator |
2026-04-05 05:02:35.120907 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-05 05:02:35.120914 | orchestrator | Sunday 05 April 2026 05:02:06 +0000 (0:00:04.166) 0:03:49.556 **********
2026-04-05 05:02:35.120921 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:02:35.120927 | orchestrator |
2026-04-05 05:02:35.120934 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-05 05:02:35.120941 | orchestrator | Sunday 05 April 2026 05:02:08 +0000 (0:00:01.688) 0:03:51.245 **********
2026-04-05 05:02:35.120948 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120956 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.120962 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.120969 | orchestrator |
2026-04-05 05:02:35.120976 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-05 05:02:35.120983 | orchestrator | Sunday 05 April 2026 05:02:11 +0000 (0:00:03.179) 0:03:54.425 **********
2026-04-05 05:02:35.120990 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.120997 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.121004 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:02:35.121012 | orchestrator |
2026-04-05 05:02:35.121019 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-05 05:02:35.121025 | orchestrator | Sunday 05 April 2026 05:02:14 +0000 (0:00:03.227) 0:03:57.653 **********
2026-04-05 05:02:35.121033 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.121040 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.121047 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.121055 | orchestrator |
2026-04-05 05:02:35.121062 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-05 05:02:35.121069 | orchestrator | Sunday 05 April 2026 05:02:17 +0000 (0:00:03.146) 0:04:00.799 **********
2026-04-05 05:02:35.121076 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.121083 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.121091 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:02:35.121098 | orchestrator |
2026-04-05 05:02:35.121106 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-05 05:02:35.121113 | orchestrator | Sunday 05 April 2026 05:02:20 +0000 (0:00:03.074) 0:04:03.874 **********
2026-04-05 05:02:35.121121 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.121128 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.121135 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.121142 | orchestrator |
2026-04-05 05:02:35.121150 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-05 05:02:35.121157 | orchestrator | Sunday 05 April 2026 05:02:27 +0000 (0:00:06.684) 0:04:10.558 **********
2026-04-05 05:02:35.121164 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.121172 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.121178 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.121185 | orchestrator |
2026-04-05 05:02:35.121193 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-05 05:02:35.121206 | orchestrator | Sunday 05 April 2026 05:02:30 +0000 (0:00:03.459) 0:04:14.018 **********
2026-04-05 05:02:35.121214 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:02:35.121222 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:02:35.121229 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:02:35.121236 | orchestrator |
2026-04-05 05:02:35.121243 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-05 05:02:35.121250 | orchestrator | Sunday 05 April 2026 05:02:32 +0000 (0:00:01.396) 0:04:15.415 **********
2026-04-05 05:02:35.121257 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:02:35.121264 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:02:35.121271 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:02:35.121278 | orchestrator |
2026-04-05 05:02:35.121295 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-05 05:02:57.910903 | orchestrator | Sunday 05 April 2026 05:02:35 +0000 (0:00:03.651) 0:04:19.066 **********
2026-04-05 05:02:57.911020 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:02:57.911037 | orchestrator |
2026-04-05 05:02:57.911050 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-04-05 05:02:57.911062 | orchestrator | Sunday 05 April 2026 05:02:37 +0000 (0:00:01.939) 0:04:21.006 **********
2026-04-05 05:02:57.911073 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:02:57.911086 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:02:57.911097 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:02:57.911108 | orchestrator |
2026-04-05 05:02:57.911119 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 05:02:57.911131 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-05 05:02:57.911143 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-05 05:02:57.911154 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-05 05:02:57.911165 | orchestrator |
2026-04-05 05:02:57.911176 | orchestrator |
2026-04-05 05:02:57.911187 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 05:02:57.911198 | orchestrator | Sunday 05 April 2026 05:02:57 +0000 (0:00:19.703) 0:04:40.710 **********
2026-04-05 05:02:57.911209 | orchestrator | ===============================================================================
2026-04-05 05:02:57.911238 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.77s
2026-04-05 05:02:57.911249 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 21.69s
2026-04-05 05:02:57.911260 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 19.70s
2026-04-05 05:02:57.911272 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.04s
2026-04-05 05:02:57.911282 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.68s
2026-04-05 05:02:57.911293 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.96s
2026-04-05 05:02:57.911303 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.35s
2026-04-05 05:02:57.911314 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.24s
2026-04-05 05:02:57.911325 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.96s
2026-04-05 05:02:57.911335 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.92s
2026-04-05 05:02:57.911346 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.82s
2026-04-05 05:02:57.911356 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.79s
2026-04-05 05:02:57.911367 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.65s
2026-04-05 05:02:57.911402 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.61s
2026-04-05 05:02:57.911416 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.58s
2026-04-05 05:02:57.911429 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.52s
2026-04-05 05:02:57.911443 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.46s
2026-04-05 05:02:57.911456 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.45s
2026-04-05 05:02:57.911468 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.23s
2026-04-05 05:02:57.911482 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.18s
2026-04-05 05:02:58.121163 | orchestrator | + osism apply -a upgrade rabbitmq
2026-04-05 05:02:59.668039 | orchestrator | 2026-04-05 05:02:59 | INFO  | Prepare task for execution of rabbitmq.
2026-04-05 05:02:59.744086 | orchestrator | 2026-04-05 05:02:59 | INFO  | Task d5057a25-925d-44ca-b3b0-e4fc61db119a (rabbitmq) was prepared for execution.
2026-04-05 05:02:59.744190 | orchestrator | 2026-04-05 05:02:59 | INFO  | It takes a moment until task d5057a25-925d-44ca-b3b0-e4fc61db119a (rabbitmq) has been started and output is visible here.
2026-04-05 05:03:42.851875 | orchestrator |
2026-04-05 05:03:42.851963 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 05:03:42.851973 | orchestrator |
2026-04-05 05:03:42.851981 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 05:03:42.851988 | orchestrator | Sunday 05 April 2026 05:03:04 +0000 (0:00:01.504) 0:00:01.504 **********
2026-04-05 05:03:42.851995 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:03:42.852003 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:03:42.852010 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:03:42.852016 | orchestrator |
2026-04-05 05:03:42.852023 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 05:03:42.852031 | orchestrator | Sunday 05 April 2026 05:03:06 +0000 (0:00:01.979) 0:00:03.483 **********
2026-04-05 05:03:42.852038 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-05 05:03:42.852045 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-05 05:03:42.852051 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-05 05:03:42.852058 | orchestrator |
2026-04-05 05:03:42.852064 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-05 05:03:42.852071 | orchestrator |
2026-04-05 05:03:42.852077 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 05:03:42.852084 | orchestrator | Sunday 05 April 2026 05:03:09 +0000 (0:00:02.505) 0:00:05.989 **********
2026-04-05 05:03:42.852091 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:03:42.852098 | orchestrator |
2026-04-05 05:03:42.852105 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 05:03:42.852113 | orchestrator | Sunday 05 April 2026 05:03:12 +0000 (0:00:03.171) 0:00:09.161 **********
2026-04-05 05:03:42.852124 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:03:42.852180 | orchestrator |
2026-04-05 05:03:42.852192 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-05 05:03:42.852203 | orchestrator | Sunday 05 April 2026 05:03:15 +0000 (0:00:02.777) 0:00:11.939 **********
2026-04-05 05:03:42.852213 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:03:42.852224 | orchestrator |
2026-04-05 05:03:42.852234 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-05 05:03:42.852246 | orchestrator | Sunday 05 April 2026 05:03:18 +0000 (0:00:02.990) 0:00:14.929 **********
2026-04-05 05:03:42.852256 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:03:42.852267 | orchestrator |
2026-04-05 05:03:42.852278 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-05 05:03:42.852289 | orchestrator | Sunday 05 April 2026 05:03:27 +0000 (0:00:09.623) 0:00:24.553 **********
2026-04-05 05:03:42.852329 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 05:03:42.852343 | orchestrator |  "changed": false,
2026-04-05 05:03:42.852355 | orchestrator |  "msg": "All assertions passed"
2026-04-05 05:03:42.852363 | orchestrator | }
2026-04-05 05:03:42.852371 | orchestrator |
2026-04-05 05:03:42.852378 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-05 05:03:42.852398 | orchestrator | Sunday 05 April 2026 05:03:29 +0000 (0:00:01.395) 0:00:25.948 **********
2026-04-05 05:03:42.852405 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 05:03:42.852412 | orchestrator |  "changed": false,
2026-04-05 05:03:42.852420 | orchestrator |  "msg": "All assertions passed"
2026-04-05 05:03:42.852429 | orchestrator | }
2026-04-05 05:03:42.852437 | orchestrator |
2026-04-05 05:03:42.852446 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-05 05:03:42.852454 | orchestrator | Sunday 05 April 2026 05:03:30 +0000 (0:00:01.764) 0:00:27.713 **********
2026-04-05 05:03:42.852464 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:03:42.852472 | orchestrator |
2026-04-05 05:03:42.852480 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-05 05:03:42.852489 | orchestrator | Sunday 05 April 2026 05:03:32 +0000 (0:00:01.927) 0:00:29.640 **********
2026-04-05 05:03:42.852500 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:03:42.852513 | orchestrator |
2026-04-05 05:03:42.852525 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-05 05:03:42.852537 | orchestrator | Sunday 05 April 2026 05:03:34 +0000 (0:00:02.264) 0:00:31.905 **********
2026-04-05 05:03:42.852549 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:03:42.852560 | orchestrator |
2026-04-05 05:03:42.852573 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-05 05:03:42.852585 |
orchestrator | Sunday 05 April 2026 05:03:37 +0000 (0:00:02.787) 0:00:34.692 ********** 2026-04-05 05:03:42.852598 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:03:42.852611 | orchestrator | 2026-04-05 05:03:42.852622 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-05 05:03:42.852631 | orchestrator | Sunday 05 April 2026 05:03:39 +0000 (0:00:01.602) 0:00:36.294 ********** 2026-04-05 05:03:42.852665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:03:42.852678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:03:42.852701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:03:42.852711 | orchestrator | 2026-04-05 05:03:42.852720 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-05 05:03:42.852729 | orchestrator | Sunday 
05 April 2026 05:03:41 +0000 (0:00:02.121) 0:00:38.416 ********** 2026-04-05 05:03:42.852739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:03:42.852755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:03.006241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:03.006464 | orchestrator | 2026-04-05 05:04:03.006488 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-05 05:04:03.006506 | orchestrator | Sunday 05 April 2026 05:03:43 +0000 (0:00:02.453) 0:00:40.869 ********** 2026-04-05 05:04:03.006522 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 05:04:03.006537 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 05:04:03.006567 | orchestrator 
| ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-05 05:04:03.006582 | orchestrator | 2026-04-05 05:04:03.006597 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-05 05:04:03.006613 | orchestrator | Sunday 05 April 2026 05:03:46 +0000 (0:00:02.383) 0:00:43.253 ********** 2026-04-05 05:04:03.006628 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 05:04:03.006642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 05:04:03.006657 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-05 05:04:03.006672 | orchestrator | 2026-04-05 05:04:03.006685 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-05 05:04:03.006699 | orchestrator | Sunday 05 April 2026 05:03:49 +0000 (0:00:02.742) 0:00:45.996 ********** 2026-04-05 05:04:03.006715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 05:04:03.006729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 05:04:03.006744 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-05 05:04:03.006758 | orchestrator | 2026-04-05 05:04:03.006769 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-05 05:04:03.006780 | orchestrator | Sunday 05 April 2026 05:03:51 +0000 (0:00:02.354) 0:00:48.350 ********** 2026-04-05 05:04:03.006790 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 05:04:03.006800 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 
05:04:03.006810 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-05 05:04:03.006820 | orchestrator | 2026-04-05 05:04:03.006829 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-05 05:04:03.006839 | orchestrator | Sunday 05 April 2026 05:03:54 +0000 (0:00:02.640) 0:00:50.991 ********** 2026-04-05 05:04:03.006849 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 05:04:03.006859 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 05:04:03.006877 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-05 05:04:03.006886 | orchestrator | 2026-04-05 05:04:03.006896 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-05 05:04:03.006905 | orchestrator | Sunday 05 April 2026 05:03:56 +0000 (0:00:02.305) 0:00:53.297 ********** 2026-04-05 05:04:03.006915 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 05:04:03.006924 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 05:04:03.006934 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-05 05:04:03.006943 | orchestrator | 2026-04-05 05:04:03.006952 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-05 05:04:03.006962 | orchestrator | Sunday 05 April 2026 05:03:58 +0000 (0:00:02.255) 0:00:55.552 ********** 2026-04-05 05:04:03.006972 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 05:04:03.006981 | orchestrator | 2026-04-05 05:04:03.007008 | orchestrator | TASK 
[service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-05 05:04:03.007019 | orchestrator | Sunday 05 April 2026 05:04:00 +0000 (0:00:01.855) 0:00:57.408 ********** 2026-04-05 05:04:03.007030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:03.007047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:03.007059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:03.007076 | orchestrator | 2026-04-05 05:04:03.007086 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-05 05:04:03.007095 | orchestrator | Sunday 05 April 2026 05:04:02 +0000 (0:00:02.383) 0:00:59.791 ********** 2026-04-05 05:04:03.007113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:04:10.906648 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:04:10.906762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:04:10.906778 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:04:10.906787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:04:10.906815 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:04:10.906823 | orchestrator | 2026-04-05 05:04:10.906831 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-05 05:04:10.906840 | orchestrator | Sunday 05 April 2026 05:04:04 +0000 (0:00:01.443) 0:01:01.235 ********** 2026-04-05 05:04:10.906848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:04:10.906856 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:04:10.906878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-04-05 05:04:10.906887 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:04:10.906899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:04:10.906906 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:04:10.906914 | orchestrator | 2026-04-05 05:04:10.906927 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-05 05:04:10.906935 | orchestrator | Sunday 05 April 2026 05:04:06 +0000 (0:00:01.901) 0:01:03.136 ********** 2026-04-05 05:04:10.906942 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:04:10.906950 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:04:10.906957 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:04:10.906964 | orchestrator | 2026-04-05 05:04:10.906972 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-05 05:04:10.906979 | orchestrator | Sunday 05 April 2026 05:04:09 
+0000 (0:00:03.621) 0:01:06.758 ********** 2026-04-05 05:04:10.906987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:04:10.907001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:05:55.017829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-05 05:05:55.017952 | orchestrator | 2026-04-05 05:05:55.017971 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-05 05:05:55.018010 | orchestrator | Sunday 05 April 2026 05:04:12 +0000 (0:00:02.164) 0:01:08.922 ********** 2026-04-05 05:05:55.018147 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 05:05:55.018161 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:05:55.018173 | orchestrator | } 2026-04-05 05:05:55.018184 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 05:05:55.018195 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-05 05:05:55.018205 | orchestrator | } 2026-04-05 05:05:55.018216 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 05:05:55.018227 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:05:55.018238 | orchestrator | } 2026-04-05 05:05:55.018249 | orchestrator | 2026-04-05 05:05:55.018260 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 05:05:55.018271 | orchestrator | Sunday 05 April 2026 05:04:13 +0000 (0:00:01.574) 0:01:10.496 ********** 2026-04-05 05:05:55.018285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:05:55.018299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:05:55.018312 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:05:55.018323 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:05:55.018362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-04-05 05:05:55.018389 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:05:55.018403 | orchestrator | 2026-04-05 05:05:55.018417 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-05 05:05:55.018430 | orchestrator | Sunday 05 April 2026 05:04:15 +0000 (0:00:02.022) 0:01:12.519 ********** 2026-04-05 05:05:55.018444 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:05:55.018457 | orchestrator | changed: [testbed-node-1] 2026-04-05 05:05:55.018471 | orchestrator | changed: [testbed-node-2] 2026-04-05 05:05:55.018484 | orchestrator | 2026-04-05 05:05:55.018499 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 05:05:55.018512 | orchestrator | 2026-04-05 05:05:55.018526 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 05:05:55.018539 | orchestrator | Sunday 05 April 2026 05:04:17 +0000 (0:00:01.730) 0:01:14.249 ********** 2026-04-05 05:05:55.018553 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:05:55.018567 | orchestrator | 2026-04-05 05:05:55.018580 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 05:05:55.018591 | orchestrator | Sunday 05 April 2026 05:04:19 +0000 (0:00:02.090) 0:01:16.339 ********** 2026-04-05 05:05:55.018602 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:05:55.018613 | orchestrator | 2026-04-05 05:05:55.018624 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 05:05:55.018635 | orchestrator | Sunday 05 April 2026 05:04:27 +0000 (0:00:08.506) 0:01:24.846 ********** 2026-04-05 05:05:55.018645 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:05:55.018656 | orchestrator | 2026-04-05 05:05:55.018667 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-04-05 05:05:55.018678 | orchestrator | Sunday 05 April 2026 05:04:36 +0000 (0:00:08.981) 0:01:33.827 ********** 2026-04-05 05:05:55.018689 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:05:55.018700 | orchestrator | 2026-04-05 05:05:55.018710 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 05:05:55.018721 | orchestrator | 2026-04-05 05:05:55.018732 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 05:05:55.018743 | orchestrator | Sunday 05 April 2026 05:04:45 +0000 (0:00:08.805) 0:01:42.633 ********** 2026-04-05 05:05:55.018754 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:05:55.018765 | orchestrator | 2026-04-05 05:05:55.018775 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 05:05:55.018786 | orchestrator | Sunday 05 April 2026 05:04:47 +0000 (0:00:01.692) 0:01:44.325 ********** 2026-04-05 05:05:55.018797 | orchestrator | changed: [testbed-node-1] 2026-04-05 05:05:55.018808 | orchestrator | 2026-04-05 05:05:55.018819 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 05:05:55.018830 | orchestrator | Sunday 05 April 2026 05:04:56 +0000 (0:00:09.401) 0:01:53.726 ********** 2026-04-05 05:05:55.018841 | orchestrator | changed: [testbed-node-1] 2026-04-05 05:05:55.018851 | orchestrator | 2026-04-05 05:05:55.018862 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-05 05:05:55.018873 | orchestrator | Sunday 05 April 2026 05:05:10 +0000 (0:00:13.954) 0:02:07.681 ********** 2026-04-05 05:05:55.018884 | orchestrator | changed: [testbed-node-1] 2026-04-05 05:05:55.018895 | orchestrator | 2026-04-05 05:05:55.018906 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-05 
05:05:55.018916 | orchestrator | 2026-04-05 05:05:55.018927 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-05 05:05:55.018938 | orchestrator | Sunday 05 April 2026 05:05:20 +0000 (0:00:09.635) 0:02:17.316 ********** 2026-04-05 05:05:55.018949 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:05:55.018960 | orchestrator | 2026-04-05 05:05:55.018978 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-05 05:05:55.018988 | orchestrator | Sunday 05 April 2026 05:05:22 +0000 (0:00:01.694) 0:02:19.011 ********** 2026-04-05 05:05:55.018999 | orchestrator | changed: [testbed-node-2] 2026-04-05 05:05:55.019010 | orchestrator | 2026-04-05 05:05:55.019100 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-05 05:05:55.019112 | orchestrator | Sunday 05 April 2026 05:05:31 +0000 (0:00:09.498) 0:02:28.510 ********** 2026-04-05 05:05:55.019123 | orchestrator | changed: [testbed-node-2] 2026-04-05 05:05:55.019134 | orchestrator | 2026-04-05 05:05:55.019144 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-05 05:05:55.019155 | orchestrator | Sunday 05 April 2026 05:05:46 +0000 (0:00:14.409) 0:02:42.920 ********** 2026-04-05 05:05:55.019166 | orchestrator | changed: [testbed-node-2] 2026-04-05 05:05:55.019177 | orchestrator | 2026-04-05 05:05:55.019187 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-05 05:05:55.019198 | orchestrator | 2026-04-05 05:05:55.019209 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-05 05:05:55.019228 | orchestrator | Sunday 05 April 2026 05:05:54 +0000 (0:00:08.985) 0:02:51.906 ********** 2026-04-05 05:06:01.329498 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 
05:06:01.329609 | orchestrator | 2026-04-05 05:06:01.329626 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-05 05:06:01.329638 | orchestrator | Sunday 05 April 2026 05:05:56 +0000 (0:00:01.562) 0:02:53.469 ********** 2026-04-05 05:06:01.329649 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:06:01.329676 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:06:01.329687 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:06:01.329698 | orchestrator | 2026-04-05 05:06:01.329710 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 05:06:01.329722 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 05:06:01.329735 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 05:06:01.329765 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-05 05:06:01.329776 | orchestrator | 2026-04-05 05:06:01.329789 | orchestrator | 2026-04-05 05:06:01.329808 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 05:06:01.329827 | orchestrator | Sunday 05 April 2026 05:06:00 +0000 (0:00:04.365) 0:02:57.835 ********** 2026-04-05 05:06:01.329845 | orchestrator | =============================================================================== 2026-04-05 05:06:01.329864 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.34s 2026-04-05 05:06:01.329880 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 27.43s 2026-04-05 05:06:01.329896 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.41s 2026-04-05 05:06:01.329913 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.62s 
2026-04-05 05:06:01.329929 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.48s 2026-04-05 05:06:01.329948 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.37s 2026-04-05 05:06:01.329965 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.62s 2026-04-05 05:06:01.329982 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.17s 2026-04-05 05:06:01.329999 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.99s 2026-04-05 05:06:01.330111 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.79s 2026-04-05 05:06:01.330141 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.78s 2026-04-05 05:06:01.330195 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.74s 2026-04-05 05:06:01.330209 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.64s 2026-04-05 05:06:01.330223 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.51s 2026-04-05 05:06:01.330236 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.45s 2026-04-05 05:06:01.330249 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.38s 2026-04-05 05:06:01.330262 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.38s 2026-04-05 05:06:01.330276 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.35s 2026-04-05 05:06:01.330288 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.31s 2026-04-05 05:06:01.330301 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.26s 2026-04-05 
05:06:01.532449 | orchestrator | + osism apply -a upgrade openvswitch 2026-04-05 05:06:02.850522 | orchestrator | 2026-04-05 05:06:02 | INFO  | Prepare task for execution of openvswitch. 2026-04-05 05:06:02.920235 | orchestrator | 2026-04-05 05:06:02 | INFO  | Task 913abc4d-0e06-4aba-9e10-40ff77795044 (openvswitch) was prepared for execution. 2026-04-05 05:06:02.920315 | orchestrator | 2026-04-05 05:06:02 | INFO  | It takes a moment until task 913abc4d-0e06-4aba-9e10-40ff77795044 (openvswitch) has been started and output is visible here. 2026-04-05 05:06:20.301544 | orchestrator | 2026-04-05 05:06:20.301655 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 05:06:20.301670 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 05:06:20.301682 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 05:06:20.301702 | orchestrator | 2026-04-05 05:06:20.301712 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 05:06:20.301721 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 05:06:20.301731 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 05:06:20.301751 | orchestrator | Sunday 05 April 2026 05:06:07 +0000 (0:00:01.520) 0:00:01.520 ********** 2026-04-05 05:06:20.301761 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:06:20.301771 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:06:20.301781 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:06:20.301791 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:06:20.301800 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:06:20.301810 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:06:20.301819 | orchestrator | 2026-04-05 05:06:20.301829 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 
05:06:20.301839 | orchestrator | Sunday 05 April 2026 05:06:09 +0000 (0:00:01.749) 0:00:03.270 ********** 2026-04-05 05:06:20.301848 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301858 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301868 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301877 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301887 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301896 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-05 05:06:20.301906 | orchestrator | 2026-04-05 05:06:20.301931 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-05 05:06:20.301964 | orchestrator | 2026-04-05 05:06:20.301975 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-05 05:06:20.301985 | orchestrator | Sunday 05 April 2026 05:06:10 +0000 (0:00:01.242) 0:00:04.512 ********** 2026-04-05 05:06:20.301995 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 05:06:20.302006 | orchestrator | 2026-04-05 05:06:20.302102 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 05:06:20.302120 | orchestrator | Sunday 05 April 2026 05:06:12 +0000 (0:00:01.978) 0:00:06.491 ********** 2026-04-05 05:06:20.302133 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-05 05:06:20.302145 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-05 05:06:20.302157 | orchestrator | ok: 
[testbed-node-2] => (item=openvswitch) 2026-04-05 05:06:20.302190 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-05 05:06:20.302201 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-05 05:06:20.302212 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-05 05:06:20.302224 | orchestrator | 2026-04-05 05:06:20.302235 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 05:06:20.302247 | orchestrator | Sunday 05 April 2026 05:06:14 +0000 (0:00:01.658) 0:00:08.150 ********** 2026-04-05 05:06:20.302258 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-05 05:06:20.302269 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-05 05:06:20.302294 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-04-05 05:06:20.302305 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-05 05:06:20.302326 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-05 05:06:20.302338 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-05 05:06:20.302349 | orchestrator | 2026-04-05 05:06:20.302360 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 05:06:20.302371 | orchestrator | Sunday 05 April 2026 05:06:16 +0000 (0:00:02.149) 0:00:10.299 ********** 2026-04-05 05:06:20.302382 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-05 05:06:20.302392 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:06:20.302403 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-05 05:06:20.302414 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:06:20.302426 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-05 05:06:20.302437 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:06:20.302448 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-05 
05:06:20.302458 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:06:20.302467 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-05 05:06:20.302476 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:06:20.302486 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-05 05:06:20.302612 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:06:20.302625 | orchestrator | 2026-04-05 05:06:20.302635 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-05 05:06:20.302644 | orchestrator | Sunday 05 April 2026 05:06:18 +0000 (0:00:01.618) 0:00:11.917 ********** 2026-04-05 05:06:20.302654 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:06:20.302664 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:06:20.302673 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:06:20.302683 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:06:20.302693 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:06:20.302721 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:06:20.302732 | orchestrator | 2026-04-05 05:06:20.302741 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-05 05:06:20.302751 | orchestrator | Sunday 05 April 2026 05:06:19 +0000 (0:00:01.208) 0:00:13.126 ********** 2026-04-05 05:06:20.302764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:20.302799 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:20.302811 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:20.302821 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:20.302831 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:20.302872 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534685 | orchestrator 
| ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534818 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534831 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534843 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534894 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534908 | orchestrator | 2026-04-05 05:06:22.534921 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-05 05:06:22.534934 | orchestrator | Sunday 05 April 2026 05:06:20 +0000 (0:00:01.538) 0:00:14.664 ********** 2026-04-05 05:06:22.534945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534963 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534975 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534986 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-05 05:06:22.534997 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:22.535024 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:26.010777 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010892 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010934 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010961 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:26.010974 | orchestrator |
2026-04-05 05:06:26.010985 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-05 05:06:26.010997 | orchestrator | Sunday 05 April 2026 05:06:23 +0000 (0:00:02.463) 0:00:17.128 **********
2026-04-05 05:06:26.011007 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:06:26.011018 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:06:26.011028 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:06:26.011038 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:06:26.011047 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:06:26.011057 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:06:26.011067 | orchestrator |
2026-04-05 05:06:26.011082 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-05 05:06:26.011092 | orchestrator | Sunday 05 April 2026 05:06:24 +0000 (0:00:01.183) 0:00:18.311 **********
2026-04-05 05:06:26.011102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:26.011113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:26.011131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:26.011141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:26.011159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:27.943181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:27.943337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943465 | orchestrator |
2026-04-05 05:06:27.943478 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-04-05 05:06:27.943491 | orchestrator | Sunday 05 April 2026 05:06:26 +0000 (0:00:02.208) 0:00:20.520 **********
2026-04-05 05:06:27.943503 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 05:06:27.943515 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943527 | orchestrator | }
2026-04-05 05:06:27.943538 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 05:06:27.943549 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943567 | orchestrator | }
2026-04-05 05:06:27.943578 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 05:06:27.943588 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943599 | orchestrator | }
2026-04-05 05:06:27.943610 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 05:06:27.943621 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943631 | orchestrator | }
2026-04-05 05:06:27.943642 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 05:06:27.943653 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943664 | orchestrator | }
2026-04-05 05:06:27.943674 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 05:06:27.943685 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:06:27.943698 | orchestrator | }
2026-04-05 05:06:27.943711 | orchestrator |
2026-04-05 05:06:27.943724 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 05:06:27.943737 | orchestrator | Sunday 05 April 2026 05:06:27 +0000 (0:00:00.679) 0:00:21.200 **********
2026-04-05 05:06:27.943752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:27.943766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:27.943781 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:06:27.943795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:27.943821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:51.801922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:51.802087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:51.802098 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:06:51.802106 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:06:51.802112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:51.802118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:51.802124 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:06:51.802146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:51.802169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:51.802204 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:06:51.802211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-05 05:06:51.802224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-05 05:06:51.802230 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:06:51.802236 | orchestrator |
2026-04-05 05:06:51.802243 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802250 | orchestrator | Sunday 05 April 2026 05:06:29 +0000 (0:00:01.827) 0:00:23.027 **********
2026-04-05 05:06:51.802255 | orchestrator |
2026-04-05 05:06:51.802261 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802266 | orchestrator | Sunday 05 April 2026 05:06:29 +0000 (0:00:00.350) 0:00:23.378
2026-04-05 05:06:51.802272 | orchestrator |
2026-04-05 05:06:51.802277 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802282 | orchestrator | Sunday 05 April 2026 05:06:29 +0000 (0:00:00.148) 0:00:23.527 **********
2026-04-05 05:06:51.802288 | orchestrator |
2026-04-05 05:06:51.802293 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802299 | orchestrator | Sunday 05 April 2026 05:06:29 +0000 (0:00:00.175) 0:00:23.703 **********
2026-04-05 05:06:51.802304 | orchestrator |
2026-04-05 05:06:51.802309 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802314 | orchestrator | Sunday 05 April 2026 05:06:30 +0000 (0:00:00.147) 0:00:23.851 **********
2026-04-05 05:06:51.802320 | orchestrator |
2026-04-05 05:06:51.802325 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-05 05:06:51.802331 | orchestrator | Sunday 05 April 2026 05:06:30 +0000 (0:00:00.168) 0:00:24.019 **********
2026-04-05 05:06:51.802336 | orchestrator |
2026-04-05 05:06:51.802357 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-05 05:06:51.802362 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-05 05:06:51.802375 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-05 05:06:51.802386 | orchestrator | Sunday 05 April 2026 05:06:30 +0000 (0:00:00.159) 0:00:24.178 **********
2026-04-05 05:06:51.802392 | orchestrator | changed: [testbed-node-4]
2026-04-05 05:06:51.802397 | orchestrator | changed: [testbed-node-5]
2026-04-05 05:06:51.802403 | orchestrator | changed: [testbed-node-3]
2026-04-05 05:06:51.802408 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:06:51.802413 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:06:51.802419 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:06:51.802424 | orchestrator |
2026-04-05 05:06:51.802429 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-05 05:06:51.802440 | orchestrator | Sunday 05 April 2026 05:06:41 +0000 (0:00:10.924) 0:00:35.103 **********
2026-04-05 05:06:51.802446 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:06:51.802454 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:06:51.802461 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:06:51.802467 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:06:51.802474 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:06:51.802480 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:06:51.802486 | orchestrator |
2026-04-05 05:06:51.802493 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-05 05:06:51.802499 | orchestrator | Sunday 05 April 2026 05:06:42 +0000 (0:00:01.285) 0:00:36.389 **********
2026-04-05 05:06:51.802505 | orchestrator | changed: [testbed-node-5]
2026-04-05 05:06:51.802515 | orchestrator | changed: [testbed-node-3]
2026-04-05 05:07:05.904887 | orchestrator | changed: [testbed-node-4]
2026-04-05 05:07:05.905002 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:07:05.905018 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:07:05.905030 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:07:05.905041 | orchestrator |
2026-04-05 05:07:05.905054 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-05 05:07:05.905066 | orchestrator | Sunday 05 April 2026 05:06:52 +0000 (0:00:10.360) 0:00:46.749 **********
2026-04-05 05:07:05.905077 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-05 05:07:05.905090 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-05 05:07:05.905101 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-05 05:07:05.905112 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-05 05:07:05.905122 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-05 05:07:05.905133 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-05 05:07:05.905144 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-05 05:07:05.905155 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-05 05:07:05.905166 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-05 05:07:05.905176 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-05 05:07:05.905187 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-05 05:07:05.905198 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-05 05:07:05.905209 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905245 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905256 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905267 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905278 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905289 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-05 05:07:05.905300 | orchestrator |
2026-04-05 05:07:05.905311 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-05 05:07:05.905321 | orchestrator | Sunday 05 April 2026 05:06:59 +0000 (0:00:06.553) 0:00:53.303 **********
2026-04-05 05:07:05.905333 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-05 05:07:05.905343 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:07:05.905354 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-05 05:07:05.905365 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:07:05.905375 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-05 05:07:05.905386 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:07:05.905396 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-04-05 05:07:05.905407 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-04-05 05:07:05.905445 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-04-05 05:07:05.905460 | orchestrator |
2026-04-05 05:07:05.905475 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-05 05:07:05.905488 | orchestrator | Sunday 05 April 2026 05:07:01 +0000 (0:00:02.300) 0:00:55.604 **********
2026-04-05 05:07:05.905501 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905514 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:07:05.905526 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905539 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:07:05.905552 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905564 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:07:05.905577 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905590 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905618 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-05 05:07:05.905632 | orchestrator |
2026-04-05 05:07:05.905645 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 05:07:05.905660 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 05:07:05.905675 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 05:07:05.905704 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 05:07:05.905716 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 05:07:05.905727 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 05:07:05.905738 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 05:07:05.905748 | orchestrator |
2026-04-05 05:07:05.905759 | orchestrator |
2026-04-05 05:07:05.905770 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 05:07:05.905790 | orchestrator | Sunday 05 April 2026 05:07:05 +0000 (0:00:03.641) 0:00:59.246 **********
2026-04-05 05:07:05.905801 | orchestrator | ===============================================================================
2026-04-05 05:07:05.905812 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.93s
2026-04-05 05:07:05.905822 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.36s
2026-04-05 05:07:05.905833 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.55s
2026-04-05 05:07:05.905843 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.64s
2026-04-05 05:07:05.905854 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.46s
2026-04-05 05:07:05.905864 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.30s
2026-04-05 05:07:05.905875 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.21s
2026-04-05 05:07:05.905886 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.15s
2026-04-05 05:07:05.905896 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.98s
2026-04-05 05:07:05.905906 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.83s
2026-04-05 05:07:05.905917 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s
2026-04-05 05:07:05.905927 | orchestrator | module-load : Load modules ---------------------------------------------- 1.66s
2026-04-05 05:07:05.905938 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.62s
2026-04-05 05:07:05.905949 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.54s
2026-04-05 05:07:05.905959 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.29s
2026-04-05 05:07:05.905970 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s
2026-04-05 05:07:05.905981 | orchestrator |
openvswitch : Create /run/openvswitch directory on host ----------------- 1.21s 2026-04-05 05:07:05.905991 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.18s 2026-04-05 05:07:05.906001 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.15s 2026-04-05 05:07:05.906069 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.68s 2026-04-05 05:07:06.097067 | orchestrator | + osism apply -a upgrade ovn 2026-04-05 05:07:07.399536 | orchestrator | 2026-04-05 05:07:07 | INFO  | Prepare task for execution of ovn. 2026-04-05 05:07:07.477034 | orchestrator | 2026-04-05 05:07:07 | INFO  | Task 5bda0089-e810-4405-bf64-a00b06225e35 (ovn) was prepared for execution. 2026-04-05 05:07:07.477158 | orchestrator | 2026-04-05 05:07:07 | INFO  | It takes a moment until task 5bda0089-e810-4405-bf64-a00b06225e35 (ovn) has been started and output is visible here. 2026-04-05 05:07:28.812098 | orchestrator | 2026-04-05 05:07:28.812219 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 05:07:28.812238 | orchestrator | 2026-04-05 05:07:28.812261 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 05:07:28.812282 | orchestrator | Sunday 05 April 2026 05:07:12 +0000 (0:00:01.448) 0:00:01.448 ********** 2026-04-05 05:07:28.812303 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:07:28.812324 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:07:28.812345 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:07:28.812364 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:07:28.812383 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:07:28.812403 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:07:28.812423 | orchestrator | 2026-04-05 05:07:28.812444 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2026-04-05 05:07:28.812465 | orchestrator | Sunday 05 April 2026 05:07:15 +0000 (0:00:03.228) 0:00:04.676 ********** 2026-04-05 05:07:28.812486 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-05 05:07:28.812570 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-05 05:07:28.812609 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-05 05:07:28.812627 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-05 05:07:28.812644 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-05 05:07:28.812662 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-05 05:07:28.812680 | orchestrator | 2026-04-05 05:07:28.812698 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-05 05:07:28.812716 | orchestrator | 2026-04-05 05:07:28.812734 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-05 05:07:28.812751 | orchestrator | Sunday 05 April 2026 05:07:19 +0000 (0:00:04.202) 0:00:08.879 ********** 2026-04-05 05:07:28.812770 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 05:07:28.812789 | orchestrator | 2026-04-05 05:07:28.812808 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-05 05:07:28.812825 | orchestrator | Sunday 05 April 2026 05:07:22 +0000 (0:00:02.605) 0:00:11.484 ********** 2026-04-05 05:07:28.812845 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.812866 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.812884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.812903 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.812922 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-05 05:07:28.812970 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813005 | orchestrator | 2026-04-05 05:07:28.813024 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-05 05:07:28.813042 | orchestrator | Sunday 05 April 2026 05:07:25 +0000 (0:00:03.002) 0:00:14.487 ********** 2026-04-05 05:07:28.813061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813109 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813127 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813144 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813163 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813181 | orchestrator | 2026-04-05 05:07:28.813198 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-05 05:07:28.813215 | orchestrator | Sunday 05 April 2026 05:07:28 +0000 (0:00:02.880) 0:00:17.368 
********** 2026-04-05 05:07:28.813233 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813251 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:28.813295 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278503 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278700 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278719 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278731 | orchestrator | 2026-04-05 05:07:38.278745 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-05 05:07:38.278758 | orchestrator | Sunday 05 April 2026 05:07:30 +0000 (0:00:02.475) 0:00:19.843 ********** 2026-04-05 05:07:38.278770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278781 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278804 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278840 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278873 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278885 | orchestrator | 2026-04-05 05:07:38.278897 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-05 05:07:38.278907 | orchestrator | Sunday 05 April 2026 05:07:33 +0000 (0:00:03.096) 0:00:22.939 ********** 2026-04-05 05:07:38.278921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.278970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.279030 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:07:38.279056 | orchestrator | 2026-04-05 05:07:38.279071 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-05 05:07:38.279085 | orchestrator | Sunday 05 April 2026 05:07:36 +0000 (0:00:02.574) 0:00:25.514 ********** 2026-04-05 05:07:38.279099 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 05:07:38.279114 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:07:38.279126 | orchestrator | } 2026-04-05 05:07:38.279140 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 05:07:38.279153 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 
05:07:38.279166 | orchestrator | } 2026-04-05 05:07:38.279179 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 05:07:38.279192 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:07:38.279204 | orchestrator | } 2026-04-05 05:07:38.279218 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 05:07:38.279231 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:07:38.279244 | orchestrator | } 2026-04-05 05:07:38.279256 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 05:07:38.279269 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:07:38.279283 | orchestrator | } 2026-04-05 05:07:38.279295 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 05:07:38.279308 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 05:07:38.279320 | orchestrator | } 2026-04-05 05:07:38.279334 | orchestrator | 2026-04-05 05:07:38.279348 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 05:07:38.279361 | orchestrator | Sunday 05 April 2026 05:07:38 +0000 (0:00:01.809) 0:00:27.323 ********** 2026-04-05 05:07:38.279381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.251940 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:08:01.252104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.252125 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:08:01.252136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.252147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:08:01.252157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.252167 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:08:01.252177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.252212 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:08:01.252223 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:08:01.252233 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:08:01.252243 | orchestrator | 2026-04-05 05:08:01.252254 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-05 05:08:01.252266 | orchestrator | Sunday 05 April 2026 05:07:40 +0000 (0:00:02.584) 0:00:29.908 ********** 2026-04-05 05:08:01.252275 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:08:01.252287 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:08:01.252296 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:08:01.252305 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:08:01.252315 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:08:01.252324 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:08:01.252334 | orchestrator | 2026-04-05 05:08:01.252347 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-05 05:08:01.252359 | orchestrator | Sunday 05 April 2026 05:07:44 +0000 (0:00:03.628) 0:00:33.537 ********** 2026-04-05 05:08:01.252370 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-05 05:08:01.252383 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-05 05:08:01.252395 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-05 05:08:01.252406 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-05 05:08:01.252417 | 
orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-05 05:08:01.252428 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-05 05:08:01.252440 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252451 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252463 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252475 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252492 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252534 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-05 05:08:01.252552 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 05:08:01.252579 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 05:08:01.252597 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 05:08:01.252614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 05:08:01.252632 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-05 05:08:01.252662 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-04-05 05:08:01.252679 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252692 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252728 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252739 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252749 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252758 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-04-05 05:08:01.252768 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252777 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252787 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252796 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252805 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252827 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-04-05 05:08:01.252837 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252846 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252855 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252865 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252874 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252884 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-05 05:08:01.252893 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 05:08:01.252903 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 05:08:01.252913 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 05:08:01.252923 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-05 05:08:01.252933 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 05:08:01.252942 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-05 05:08:01.252953 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-05 05:08:01.252965 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-05 05:08:01.252974 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-05 05:08:01.252984 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-05 05:08:01.253001 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-05 05:08:01.253019 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-05 05:10:54.098495 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 05:10:54.098609 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 05:10:54.098636 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 05:10:54.098658 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-05 05:10:54.098676 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 05:10:54.098696 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-05 05:10:54.098713 | orchestrator |
2026-04-05 05:10:54.098734 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098753 | orchestrator | Sunday 05 April 2026 05:08:04 +0000 (0:00:20.136) 0:00:53.673 **********
2026-04-05 05:10:54.098772 | orchestrator |
2026-04-05 05:10:54.098784 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098808 | orchestrator | Sunday 05 April 2026 05:08:04 +0000 (0:00:00.449) 0:00:54.122 **********
2026-04-05 05:10:54.098820 | orchestrator |
2026-04-05 05:10:54.098831 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098842 | orchestrator | Sunday 05 April 2026 05:08:05 +0000 (0:00:00.454) 0:00:54.577 **********
2026-04-05 05:10:54.098853 | orchestrator |
2026-04-05 05:10:54.098863 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098874 | orchestrator | Sunday 05 April 2026 05:08:06 +0000 (0:00:00.615) 0:00:55.192 **********
2026-04-05 05:10:54.098885 | orchestrator |
2026-04-05 05:10:54.098896 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098907 | orchestrator | Sunday 05 April 2026 05:08:06 +0000 (0:00:00.442) 0:00:55.634 **********
2026-04-05 05:10:54.098918 | orchestrator |
2026-04-05 05:10:54.098929 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-05 05:10:54.098941 | orchestrator | Sunday 05 April 2026 05:08:06 +0000 (0:00:00.437) 0:00:56.072 **********
2026-04-05 05:10:54.098951 | orchestrator |
2026-04-05 05:10:54.098962 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-05 05:10:54.098973 | orchestrator | Sunday 05 April 2026 05:08:07 +0000 (0:00:00.783) 0:00:56.856 **********
2026-04-05 05:10:54.098984 | orchestrator |
2026-04-05 05:10:54.098995 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] ***
2026-04-05 05:10:54.099006 | orchestrator | changed: [testbed-node-3]
2026-04-05 05:10:54.099018 | orchestrator | changed: [testbed-node-4]
2026-04-05 05:10:54.099029 | orchestrator | changed: [testbed-node-5]
2026-04-05 05:10:54.099039 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:10:54.099050 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:10:54.099061 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:10:54.099072 | orchestrator |
2026-04-05 05:10:54.099083 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-05 05:10:54.099094 | orchestrator |
2026-04-05 05:10:54.099105 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 05:10:54.099116 | orchestrator | Sunday 05 April 2026 05:10:19 +0000 (0:02:12.146) 0:03:09.003 **********
2026-04-05 05:10:54.099127 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:10:54.099138 | orchestrator |
2026-04-05 05:10:54.099172 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 05:10:54.099183 | orchestrator | Sunday 05 April 2026 05:10:21 +0000 (0:00:01.772) 0:03:10.776 **********
2026-04-05 05:10:54.099194 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 05:10:54.099206 | orchestrator |
2026-04-05 05:10:54.099217 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-05 05:10:54.099228 | orchestrator | Sunday 05 April 2026 05:10:23 +0000 (0:00:01.957) 0:03:12.733 **********
2026-04-05 05:10:54.099239 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099250 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099261 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099272 | orchestrator |
2026-04-05 05:10:54.099283 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-05 05:10:54.099294 | orchestrator | Sunday 05 April 2026 05:10:25 +0000 (0:00:01.810) 0:03:14.543 **********
2026-04-05 05:10:54.099305 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099315 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099326 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099337 | orchestrator |
2026-04-05 05:10:54.099348 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-05 05:10:54.099359 | orchestrator | Sunday 05 April 2026 05:10:26 +0000 (0:00:01.379) 0:03:15.923 **********
2026-04-05 05:10:54.099370 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099380 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099391 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099402 | orchestrator |
2026-04-05 05:10:54.099413 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-05 05:10:54.099424 | orchestrator | Sunday 05 April 2026 05:10:28 +0000 (0:00:01.378) 0:03:17.301 **********
2026-04-05 05:10:54.099435 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099466 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099477 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099488 | orchestrator |
2026-04-05 05:10:54.099499 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-05 05:10:54.099510 | orchestrator | Sunday 05 April 2026 05:10:29 +0000 (0:00:01.434) 0:03:18.735 **********
2026-04-05 05:10:54.099520 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099547 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099559 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099570 | orchestrator |
2026-04-05 05:10:54.099587 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-05 05:10:54.099599 | orchestrator | Sunday 05 April 2026 05:10:31 +0000 (0:00:01.408) 0:03:20.143 **********
2026-04-05 05:10:54.099610 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:10:54.099621 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:10:54.099631 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:10:54.099642 | orchestrator |
2026-04-05 05:10:54.099653 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-05 05:10:54.099664 | orchestrator | Sunday 05 April 2026 05:10:32 +0000 (0:00:01.556) 0:03:21.700 **********
2026-04-05 05:10:54.099675 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099686 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099697 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099707 | orchestrator |
2026-04-05 05:10:54.099718 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-05 05:10:54.099729 | orchestrator | Sunday 05 April 2026 05:10:34 +0000 (0:00:01.764) 0:03:23.464 **********
2026-04-05 05:10:54.099740 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099751 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099761 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099772 | orchestrator |
2026-04-05 05:10:54.099783 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-05 05:10:54.099794 | orchestrator | Sunday 05 April 2026 05:10:35 +0000 (0:00:01.400) 0:03:24.865 **********
2026-04-05 05:10:54.099812 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099823 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099834 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099844 | orchestrator |
2026-04-05 05:10:54.099855 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-05 05:10:54.099866 | orchestrator | Sunday 05 April 2026 05:10:37 +0000 (0:00:01.821) 0:03:26.687 **********
2026-04-05 05:10:54.099876 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.099887 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.099898 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.099909 | orchestrator |
2026-04-05 05:10:54.099920 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-05 05:10:54.099931 | orchestrator | Sunday 05 April 2026 05:10:39 +0000 (0:00:01.652) 0:03:28.340 **********
2026-04-05 05:10:54.099941 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:10:54.099953 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:10:54.099963 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:10:54.099974 | orchestrator |
2026-04-05 05:10:54.099985 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-05 05:10:54.099996 | orchestrator | Sunday 05 April 2026 05:10:40 +0000 (0:00:01.340) 0:03:29.681 **********
2026-04-05 05:10:54.100007 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:10:54.100018 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:10:54.100028 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:10:54.100039 | orchestrator |
2026-04-05 05:10:54.100050 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-05 05:10:54.100060 | orchestrator | Sunday 05 April 2026 05:10:41 +0000 (0:00:01.357) 0:03:31.038 **********
2026-04-05 05:10:54.100071 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.100082 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.100093 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.100103 | orchestrator |
2026-04-05 05:10:54.100114 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-05 05:10:54.100125 | orchestrator | Sunday 05 April 2026 05:10:44 +0000 (0:00:02.145) 0:03:33.184 **********
2026-04-05 05:10:54.100135 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.100146 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.100157 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.100167 | orchestrator |
2026-04-05 05:10:54.100178 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-05 05:10:54.100189 | orchestrator | Sunday 05 April 2026 05:10:45 +0000 (0:00:01.517) 0:03:34.701 **********
2026-04-05 05:10:54.100199 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.100210 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.100221 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.100231 | orchestrator |
2026-04-05 05:10:54.100242 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-05 05:10:54.100253 | orchestrator | Sunday 05 April 2026 05:10:47 +0000 (0:00:01.864) 0:03:36.565 **********
2026-04-05 05:10:54.100263 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:10:54.100274 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:10:54.100284 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:10:54.100295 | orchestrator |
2026-04-05 05:10:54.100306 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-05 05:10:54.100317 | orchestrator | Sunday 05 April 2026 05:10:48 +0000 (0:00:01.421) 0:03:37.987 **********
2026-04-05 05:10:54.100327 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:10:54.100338 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:10:54.100349 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:10:54.100360 | orchestrator |
2026-04-05 05:10:54.100370 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-05 05:10:54.100381 | orchestrator | Sunday 05 April 2026 05:10:50 +0000 (0:00:01.638) 0:03:39.625 **********
2026-04-05 05:10:54.100392 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:10:54.100402 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:10:54.100419 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:10:54.100430 | orchestrator |
2026-04-05 05:10:54.100455 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-05 05:10:54.100468 | orchestrator | Sunday 05 April 2026 05:10:52 +0000 (0:00:01.827) 0:03:41.453 **********
2026-04-05 05:10:54.100494 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.249873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.249984 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250073 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250087 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250208 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250231 | orchestrator |
2026-04-05 05:11:00.250244 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-05 05:11:00.250256 | orchestrator | Sunday 05 April 2026 05:10:56 +0000 (0:00:03.878) 0:03:45.331 **********
2026-04-05 05:11:00.250268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250299 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250310 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:00.250335 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339886 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339909 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339947 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.339966 | orchestrator |
2026-04-05 05:11:15.339987 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-04-05 05:11:15.339998 | orchestrator | Sunday 05 April 2026 05:11:02 +0000 (0:00:06.295) 0:03:51.627 **********
2026-04-05 05:11:15.340007 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-04-05 05:11:15.340017 | orchestrator |
2026-04-05 05:11:15.340025 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-04-05 05:11:15.340034 | orchestrator | Sunday 05 April 2026 05:11:04 +0000 (0:00:02.032) 0:03:53.659 **********
2026-04-05 05:11:15.340043 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:11:15.340053 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:11:15.340073 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:11:15.340082 | orchestrator |
2026-04-05 05:11:15.340091 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-04-05 05:11:15.340100 | orchestrator | Sunday 05 April 2026 05:11:06 +0000 (0:00:01.837) 0:03:55.497 **********
2026-04-05 05:11:15.340109 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:11:15.340117 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:11:15.340126 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:11:15.340134 | orchestrator |
2026-04-05 05:11:15.340143 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-04-05 05:11:15.340151 | orchestrator | Sunday 05 April 2026 05:11:09 +0000 (0:00:03.034) 0:03:58.531 **********
2026-04-05 05:11:15.340160 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:11:15.340168 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:11:15.340177 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:11:15.340185 | orchestrator |
2026-04-05 05:11:15.340194 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-04-05 05:11:15.340203 | orchestrator | Sunday 05 April 2026 05:11:12 +0000 (0:00:02.661) 0:04:01.193 **********
2026-04-05 05:11:15.340212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:15.340284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 05:11:20.870644 | orchestrator |
2026-04-05 05:11:20.870657 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-05 05:11:20.870669 | orchestrator | Sunday 05 April 2026 05:11:17 +0000 (0:00:05.023) 0:04:06.217 **********
2026-04-05 05:11:20.870682 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 05:11:20.870695 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:11:20.870706 | orchestrator | }
2026-04-05 05:11:20.870718 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 05:11:20.870729 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:11:20.870739 | orchestrator | }
2026-04-05 05:11:20.870750 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 05:11:20.870761 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:11:20.870772 | orchestrator | }
2026-04-05 05:11:20.870783 | orchestrator |
2026-04-05 05:11:20.870794 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 05:11:20.870820 | orchestrator | Sunday 05 April 2026
05:11:18 +0000 (0:00:01.438) 0:04:07.656 ********** 2026-04-05 05:11:20.870832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 05:11:20.870989 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 05:13:15.261981 | orchestrator | 2026-04-05 05:13:15.262094 | orchestrator | TASK 
[service-check-containers : ovn_db | Check containers with iteration] *****
2026-04-05 05:13:15.262102 | orchestrator | Sunday 05 April 2026 05:11:22 +0000 (0:00:03.522) 0:04:11.178 **********
2026-04-05 05:13:15.262107 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-05 05:13:15.262113 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-05 05:13:15.262117 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-05 05:13:15.262121 | orchestrator |
2026-04-05 05:13:15.262125 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-05 05:13:15.262130 | orchestrator | Sunday 05 April 2026 05:11:45 +0000 (0:00:22.978) 0:04:34.157 **********
2026-04-05 05:13:15.262134 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 05:13:15.262138 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:13:15.262142 | orchestrator | }
2026-04-05 05:13:15.262146 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 05:13:15.262150 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:13:15.262153 | orchestrator | }
2026-04-05 05:13:15.262157 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 05:13:15.262161 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 05:13:15.262165 | orchestrator | }
2026-04-05 05:13:15.262169 | orchestrator |
2026-04-05 05:13:15.262172 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 05:13:15.262176 | orchestrator | Sunday 05 April 2026 05:11:46 +0000 (0:00:01.369) 0:04:35.527 **********
2026-04-05 05:13:15.262180 | orchestrator |
2026-04-05 05:13:15.262184 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 05:13:15.262187 | orchestrator | Sunday 05 April 2026 05:11:46 +0000 (0:00:00.485) 0:04:36.012 **********
2026-04-05 05:13:15.262191 | orchestrator |
2026-04-05 05:13:15.262195 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-05 05:13:15.262199 | orchestrator | Sunday 05 April 2026 05:11:47 +0000 (0:00:00.451) 0:04:36.463 **********
2026-04-05 05:13:15.262202 | orchestrator |
2026-04-05 05:13:15.262206 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-05 05:13:15.262210 | orchestrator | Sunday 05 April 2026 05:11:48 +0000 (0:00:00.794) 0:04:37.258 **********
2026-04-05 05:13:15.262214 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:13:15.262218 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:13:15.262222 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:13:15.262226 | orchestrator |
2026-04-05 05:13:15.262229 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-05 05:13:15.262233 | orchestrator | Sunday 05 April 2026 05:12:04 +0000 (0:00:16.694) 0:04:53.952 **********
2026-04-05 05:13:15.262245 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:13:15.262249 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:13:15.262253 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:13:15.262256 | orchestrator |
2026-04-05 05:13:15.262260 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-05 05:13:15.262264 | orchestrator | Sunday 05 April 2026 05:12:21 +0000 (0:00:16.807) 0:05:10.760 **********
2026-04-05 05:13:15.262268 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-05 05:13:15.262294 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-05 05:13:15.262298 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-05 05:13:15.262302 | orchestrator |
2026-04-05 05:13:15.262306 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-05 05:13:15.262310 | orchestrator | Sunday 05 April 2026 05:12:37 +0000 (0:00:16.138) 0:05:26.898 **********
2026-04-05 05:13:15.262313 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:13:15.262317 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:13:15.262321 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:13:15.262325 | orchestrator |
2026-04-05 05:13:15.262328 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-05 05:13:15.262332 | orchestrator | Sunday 05 April 2026 05:12:54 +0000 (0:00:16.509) 0:05:43.407 **********
2026-04-05 05:13:15.262336 | orchestrator | Pausing for 5 seconds
2026-04-05 05:13:15.262349 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:13:15.262354 | orchestrator |
2026-04-05 05:13:15.262357 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-05 05:13:15.262361 | orchestrator | Sunday 05 April 2026 05:13:00 +0000 (0:00:06.247) 0:05:49.654 **********
2026-04-05 05:13:15.262365 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:13:15.262368 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:13:15.262372 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:13:15.262376 | orchestrator |
2026-04-05 05:13:15.262380 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-05 05:13:15.262383 | orchestrator | Sunday 05 April 2026 05:13:02 +0000 (0:00:01.821) 0:05:51.476 **********
2026-04-05 05:13:15.262387 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:13:15.262391 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:13:15.262394 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:13:15.262398 | orchestrator |
2026-04-05 05:13:15.262402 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-05 05:13:15.262406 | orchestrator | Sunday 05 April 2026 05:13:04 +0000 (0:00:01.988) 0:05:53.465 **********
2026-04-05 05:13:15.262409 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:13:15.262413 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:13:15.262417 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:13:15.262421 | orchestrator |
2026-04-05 05:13:15.262424 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-05 05:13:15.262428 | orchestrator | Sunday 05 April 2026 05:13:06 +0000 (0:00:01.938) 0:05:55.404 **********
2026-04-05 05:13:15.262432 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:13:15.262436 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:13:15.262439 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:13:15.262443 | orchestrator |
2026-04-05 05:13:15.262447 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-05 05:13:15.262451 | orchestrator | Sunday 05 April 2026 05:13:08 +0000 (0:00:01.883) 0:05:57.287 **********
2026-04-05 05:13:15.262454 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:13:15.262458 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:13:15.262462 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:13:15.262466 | orchestrator |
2026-04-05 05:13:15.262469 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-05 05:13:15.262484 | orchestrator | Sunday 05 April 2026 05:13:10 +0000 (0:00:02.015) 0:05:59.302 **********
2026-04-05 05:13:15.262488 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:13:15.262491 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:13:15.262495 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:13:15.262499 | orchestrator |
2026-04-05 05:13:15.262502 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-04-05 05:13:15.262506 | orchestrator | Sunday 05 April 2026 05:13:12 +0000 (0:00:02.089) 0:06:01.392 **********
2026-04-05 05:13:15.262510 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-04-05 05:13:15.262514 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-04-05 05:13:15.262517 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-04-05 05:13:15.262521 | orchestrator |
2026-04-05 05:13:15.262530 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 05:13:15.262535 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-05 05:13:15.262541 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 05:13:15.262545 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 05:13:15.262550 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 05:13:15.262554 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 05:13:15.262558 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-05 05:13:15.262563 | orchestrator |
2026-04-05 05:13:15.262567 | orchestrator |
2026-04-05 05:13:15.262572 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 05:13:15.262576 | orchestrator | Sunday 05 April 2026 05:13:14 +0000 (0:00:02.564) 0:06:03.956 **********
2026-04-05 05:13:15.262581 | orchestrator | ===============================================================================
2026-04-05 05:13:15.262585 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.15s
2026-04-05 05:13:15.262589 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 22.98s
2026-04-05 05:13:15.262594 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.14s
2026-04-05 05:13:15.262598 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.81s
2026-04-05 05:13:15.262602 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.69s
2026-04-05 05:13:15.262607 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.51s
2026-04-05 05:13:15.262611 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.14s
2026-04-05 05:13:15.262615 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.30s
2026-04-05 05:13:15.262620 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.25s
2026-04-05 05:13:15.262624 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.02s
2026-04-05 05:13:15.262628 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.20s
2026-04-05 05:13:15.262636 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.88s
2026-04-05 05:13:15.262640 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.63s
2026-04-05 05:13:15.262644 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.52s
2026-04-05 05:13:15.262649 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.23s
2026-04-05 05:13:15.262653 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.18s
2026-04-05 05:13:15.262658 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.10s
2026-04-05 05:13:15.262662 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 3.03s
2026-04-05 05:13:15.262666 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 3.00s
2026-04-05 05:13:15.262670 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.88s
2026-04-05 05:13:15.466887 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-05 05:13:15.467031 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-05 05:13:15.467049 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-04-05 05:13:15.475242 | orchestrator | + set -e
2026-04-05 05:13:15.475354 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 05:13:15.475379 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 05:13:15.475400 | orchestrator | ++ INTERACTIVE=false
2026-04-05 05:13:15.475416 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 05:13:15.475427 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 05:13:15.475439 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-04-05 05:13:16.830194 | orchestrator | 2026-04-05 05:13:16 | INFO  | Prepare task for execution of ceph-rolling_update.
2026-04-05 05:13:16.897410 | orchestrator | 2026-04-05 05:13:16 | INFO  | Task 544e34f1-3650-4872-9661-6a6453582e0c (ceph-rolling_update) was prepared for execution.
2026-04-05 05:13:16.897499 | orchestrator | 2026-04-05 05:13:16 | INFO  | It takes a moment until task 544e34f1-3650-4872-9661-6a6453582e0c (ceph-rolling_update) has been started and output is visible here.
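As an illustrative aside (not part of the job output): the OVN service definitions deployed in the play above pass the clustered database endpoints as comma-separated `tcp:host:port` lists in the `OVN_NB_DB` and `OVN_SB_DB` environment variables. A minimal sketch of splitting such a string into endpoints — the helper name is ours, not from kolla-ansible:

```python
def parse_ovn_connection(conn: str) -> list[tuple[str, int]]:
    """Parse an OVN connection string like
    'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641' into (host, port) pairs."""
    endpoints = []
    for part in conn.split(","):
        # Each endpoint has the form 'tcp:HOST:PORT'.
        proto, host, port = part.split(":")
        if proto != "tcp":
            raise ValueError(f"unexpected protocol: {proto}")
        endpoints.append((host, int(port)))
    return endpoints

nb_db = "tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641"
print(parse_ovn_connection(nb_db))
# → [('192.168.16.10', 6641), ('192.168.16.11', 6641), ('192.168.16.12', 6641)]
```

Three endpoints, one per controller node, which matches the three-member OVN NB/SB Raft clusters being restarted in the handlers above.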
2026-04-05 05:14:40.866604 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-05 05:14:40.866751 | orchestrator | 2.16.14
2026-04-05 05:14:40.866779 | orchestrator |
2026-04-05 05:14:40.866801 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-04-05 05:14:40.866820 | orchestrator |
2026-04-05 05:14:40.866839 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-04-05 05:14:40.866858 | orchestrator | Sunday 05 April 2026 05:13:25 +0000 (0:00:01.866) 0:00:01.866 **********
2026-04-05 05:14:40.866877 | orchestrator | skipping: [localhost]
2026-04-05 05:14:40.866894 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-04-05 05:14:40.866915 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-04-05 05:14:40.866934 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-04-05 05:14:40.866953 | orchestrator |
2026-04-05 05:14:40.866970 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-04-05 05:14:40.866990 | orchestrator |
2026-04-05 05:14:40.867008 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-04-05 05:14:40.867027 | orchestrator | Sunday 05 April 2026 05:13:27 +0000 (0:00:02.057) 0:00:03.923 **********
2026-04-05 05:14:40.867038 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 05:14:40.867049 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867060 | orchestrator | }
2026-04-05 05:14:40.867071 | orchestrator | ok: [testbed-node-1] => {
2026-04-05 05:14:40.867082 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867093 | orchestrator | }
2026-04-05 05:14:40.867104 | orchestrator | ok: [testbed-node-2] => {
2026-04-05 05:14:40.867115 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867126 | orchestrator | }
2026-04-05 05:14:40.867136 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 05:14:40.867147 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867158 | orchestrator | }
2026-04-05 05:14:40.867168 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 05:14:40.867179 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867190 | orchestrator | }
2026-04-05 05:14:40.867233 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 05:14:40.867245 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867256 | orchestrator | }
2026-04-05 05:14:40.867266 | orchestrator | ok: [testbed-manager] => {
2026-04-05 05:14:40.867277 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-04-05 05:14:40.867288 | orchestrator | }
2026-04-05 05:14:40.867299 | orchestrator |
2026-04-05 05:14:40.867309 | orchestrator | TASK [Gather facts] ************************************************************
2026-04-05 05:14:40.867320 | orchestrator | Sunday 05 April 2026 05:13:34 +0000 (0:00:06.922) 0:00:10.845 **********
2026-04-05 05:14:40.867331 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:14:40.867373 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:14:40.867384 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:14:40.867395 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:14:40.867405 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:14:40.867416 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:14:40.867427 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.867436 | orchestrator |
2026-04-05 05:14:40.867446 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-04-05 05:14:40.867455 | orchestrator | Sunday 05 April 2026 05:13:40 +0000 (0:00:06.180) 0:00:17.025 **********
2026-04-05 05:14:40.867465 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:14:40.867475 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:14:40.867500 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:14:40.867510 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:14:40.867519 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:14:40.867529 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:14:40.867539 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:14:40.867548 | orchestrator |
2026-04-05 05:14:40.867558 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-04-05 05:14:40.867567 | orchestrator | Sunday 05 April 2026 05:14:10 +0000 (0:00:30.647) 0:00:47.673 **********
2026-04-05 05:14:40.867577 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.867586 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.867596 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.867605 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.867615 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.867624 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.867633 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.867643 | orchestrator |
2026-04-05 05:14:40.867653 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:14:40.867662 | orchestrator | Sunday 05 April 2026 05:14:13 +0000 (0:00:02.142) 0:00:49.815 **********
2026-04-05 05:14:40.867673 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-05 05:14:40.867684 | orchestrator |
2026-04-05 05:14:40.867694 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:14:40.867703 | orchestrator | Sunday 05 April 2026 05:14:15 +0000 (0:00:02.673) 0:00:52.489 **********
2026-04-05 05:14:40.867713 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.867725 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.867741 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.867757 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.867773 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.867788 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.867804 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.867820 | orchestrator |
2026-04-05 05:14:40.867862 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:14:40.867880 | orchestrator | Sunday 05 April 2026 05:14:18 +0000 (0:00:02.561) 0:00:55.051 **********
2026-04-05 05:14:40.867896 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.867910 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.867926 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.867940 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.867955 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.867971 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.867987 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.868003 | orchestrator |
2026-04-05 05:14:40.868020 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:14:40.868036 | orchestrator | Sunday 05 April 2026 05:14:20 +0000 (0:00:01.944) 0:00:56.996 **********
2026-04-05 05:14:40.868067 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.868085 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.868100 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.868116 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.868132 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.868179 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.868196 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.868256 | orchestrator |
2026-04-05 05:14:40.868272 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:14:40.868287 | orchestrator | Sunday 05 April 2026 05:14:22 +0000 (0:00:02.636) 0:00:59.632 **********
2026-04-05 05:14:40.868302 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.868318 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.868333 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.868348 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.868364 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.868380 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.868395 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.868410 | orchestrator |
2026-04-05 05:14:40.868426 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:14:40.868440 | orchestrator | Sunday 05 April 2026 05:14:24 +0000 (0:00:02.060) 0:01:01.693 **********
2026-04-05 05:14:40.868455 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.868469 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.868485 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.868499 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.868513 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.868530 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.868545 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.868562 | orchestrator |
2026-04-05 05:14:40.868579 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:14:40.868597 | orchestrator | Sunday 05 April 2026 05:14:27 +0000 (0:00:02.335) 0:01:04.028 **********
2026-04-05 05:14:40.868614 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.868631 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.868647 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.868663 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.868679 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.868694 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.868711 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.868727 | orchestrator |
2026-04-05 05:14:40.868744 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:14:40.868762 | orchestrator | Sunday 05 April 2026 05:14:29 +0000 (0:00:01.904) 0:01:05.933 **********
2026-04-05 05:14:40.868779 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:14:40.868796 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:14:40.868854 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:14:40.868873 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:14:40.868890 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:14:40.868905 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:14:40.868918 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:14:40.868933 | orchestrator |
2026-04-05 05:14:40.868948 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:14:40.868962 | orchestrator | Sunday 05 April 2026 05:14:31 +0000 (0:00:02.303) 0:01:08.236 **********
2026-04-05 05:14:40.868976 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.868992 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.869023 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.869040 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.869057 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.869074 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.869089 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.869105 | orchestrator |
2026-04-05 05:14:40.869120 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:14:40.869152 | orchestrator | Sunday 05 April 2026 05:14:33 +0000 (0:00:01.890) 0:01:10.127 **********
2026-04-05 05:14:40.869170 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:14:40.869187 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:14:40.869260 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:14:40.869280 | orchestrator |
2026-04-05 05:14:40.869297 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:14:40.869312 | orchestrator | Sunday 05 April 2026 05:14:35 +0000 (0:00:01.968) 0:01:12.095 **********
2026-04-05 05:14:40.869329 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:14:40.869344 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:14:40.869360 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:14:40.869377 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:14:40.869394 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:14:40.869411 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:14:40.869428 | orchestrator | ok: [testbed-manager]
2026-04-05 05:14:40.869444 | orchestrator |
2026-04-05 05:14:40.869460 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:14:40.869475 | orchestrator | Sunday 05 April 2026 05:14:37 +0000 (0:00:02.106) 0:01:14.202 **********
2026-04-05 05:14:40.869490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:14:40.869506
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:14:40.869521 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:14:40.869565 | orchestrator | 2026-04-05 05:14:40.869580 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 05:14:40.869595 | orchestrator | Sunday 05 April 2026 05:14:40 +0000 (0:00:03.219) 0:01:17.422 ********** 2026-04-05 05:14:40.869634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 05:15:03.616289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 05:15:03.616390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 05:15:03.616402 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.616411 | orchestrator | 2026-04-05 05:15:03.616420 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 05:15:03.616429 | orchestrator | Sunday 05 April 2026 05:14:42 +0000 (0:00:01.429) 0:01:18.851 ********** 2026-04-05 05:15:03.616440 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 05:15:03.616450 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:15:03.616459 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 
05:15:03.616467 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.616476 | orchestrator | 2026-04-05 05:15:03.616490 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:15:03.616505 | orchestrator | Sunday 05 April 2026 05:14:43 +0000 (0:00:01.859) 0:01:20.711 ********** 2026-04-05 05:15:03.616522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:03.616571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:03.616590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:03.616619 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.616632 | orchestrator | 2026-04-05 05:15:03.616647 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:15:03.616662 | orchestrator | 
Sunday 05 April 2026 05:14:45 +0000 (0:00:01.165) 0:01:21.876 ********** 2026-04-05 05:15:03.616679 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b58ad7ef29db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:14:38.148299', 'end': '2026-04-05 05:14:38.190167', 'delta': '0:00:00.041868', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b58ad7ef29db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:15:03.616719 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '0027b45af4f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:14:38.670546', 'end': '2026-04-05 05:14:38.717292', 'delta': '0:00:00.046746', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0027b45af4f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:15:03.616735 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0e8f8775caf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:14:39.474183', 'end': '2026-04-05 05:14:39.512711', 'delta': '0:00:00.038528', 'msg': '', 'invocation': 
{'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0e8f8775caf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:15:03.616744 | orchestrator | 2026-04-05 05:15:03.616752 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:15:03.616760 | orchestrator | Sunday 05 April 2026 05:14:46 +0000 (0:00:01.168) 0:01:23.045 ********** 2026-04-05 05:15:03.616768 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:03.616777 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:03.616785 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:03.616792 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:03.616808 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:03.616816 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:03.616825 | orchestrator | ok: [testbed-manager] 2026-04-05 05:15:03.616835 | orchestrator | 2026-04-05 05:15:03.616844 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:15:03.616853 | orchestrator | Sunday 05 April 2026 05:14:48 +0000 (0:00:02.434) 0:01:25.480 ********** 2026-04-05 05:15:03.616863 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.616872 | orchestrator | 2026-04-05 05:15:03.616882 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:15:03.616891 | orchestrator | Sunday 05 April 2026 05:14:50 +0000 (0:00:01.294) 0:01:26.774 ********** 2026-04-05 05:15:03.616899 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:03.616906 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:03.616914 | 
orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:03.616922 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:03.616930 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:03.616938 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:03.616946 | orchestrator | ok: [testbed-manager] 2026-04-05 05:15:03.616953 | orchestrator | 2026-04-05 05:15:03.616961 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 05:15:03.616969 | orchestrator | Sunday 05 April 2026 05:14:52 +0000 (0:00:02.112) 0:01:28.887 ********** 2026-04-05 05:15:03.616977 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.616985 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.616993 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.617001 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:03.617008 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.617016 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.617024 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:15:03.617032 | orchestrator | 2026-04-05 05:15:03.617044 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:15:03.617054 | orchestrator | Sunday 05 April 2026 05:14:55 +0000 (0:00:03.504) 0:01:32.392 ********** 2026-04-05 05:15:03.617063 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:03.617073 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:03.617082 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:03.617092 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:03.617101 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:03.617110 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:03.617120 | orchestrator | ok: [testbed-manager] 2026-04-05 
05:15:03.617129 | orchestrator | 2026-04-05 05:15:03.617139 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 05:15:03.617148 | orchestrator | Sunday 05 April 2026 05:14:57 +0000 (0:00:02.210) 0:01:34.602 ********** 2026-04-05 05:15:03.617158 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.617167 | orchestrator | 2026-04-05 05:15:03.617177 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 05:15:03.617186 | orchestrator | Sunday 05 April 2026 05:14:59 +0000 (0:00:01.130) 0:01:35.733 ********** 2026-04-05 05:15:03.617196 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.617205 | orchestrator | 2026-04-05 05:15:03.617215 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:15:03.617224 | orchestrator | Sunday 05 April 2026 05:15:00 +0000 (0:00:01.269) 0:01:37.003 ********** 2026-04-05 05:15:03.617234 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.617243 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:03.617253 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:03.617331 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:03.617345 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:03.617355 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:03.617364 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:03.617374 | orchestrator | 2026-04-05 05:15:03.617391 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 05:15:03.617401 | orchestrator | Sunday 05 April 2026 05:15:02 +0000 (0:00:02.516) 0:01:39.519 ********** 2026-04-05 05:15:03.617411 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:03.617420 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:03.617430 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
05:15:03.617440 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:03.617449 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:03.617459 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:03.617477 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.379511 | orchestrator | 2026-04-05 05:15:15.379626 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 05:15:15.379644 | orchestrator | Sunday 05 April 2026 05:15:05 +0000 (0:00:02.252) 0:01:41.772 ********** 2026-04-05 05:15:15.379656 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.379668 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:15.379679 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.379690 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:15.379701 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:15.379712 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:15.379723 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.379734 | orchestrator | 2026-04-05 05:15:15.379745 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 05:15:15.379756 | orchestrator | Sunday 05 April 2026 05:15:07 +0000 (0:00:02.235) 0:01:44.008 ********** 2026-04-05 05:15:15.379767 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.379778 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:15.379789 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.379799 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:15.379810 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:15.379821 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:15.379832 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.379842 | orchestrator | 2026-04-05 05:15:15.379853 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved 
symlinks] **** 2026-04-05 05:15:15.379864 | orchestrator | Sunday 05 April 2026 05:15:09 +0000 (0:00:01.882) 0:01:45.890 ********** 2026-04-05 05:15:15.379875 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.379886 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:15.379896 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.379907 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:15.379918 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:15.379929 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:15.379939 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.379950 | orchestrator | 2026-04-05 05:15:15.379961 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 05:15:15.379972 | orchestrator | Sunday 05 April 2026 05:15:11 +0000 (0:00:02.121) 0:01:48.012 ********** 2026-04-05 05:15:15.379983 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.379994 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:15.380005 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.380015 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:15.380026 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:15.380037 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:15.380050 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.380063 | orchestrator | 2026-04-05 05:15:15.380076 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 05:15:15.380089 | orchestrator | Sunday 05 April 2026 05:15:13 +0000 (0:00:01.862) 0:01:49.874 ********** 2026-04-05 05:15:15.380101 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.380113 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:15.380125 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.380138 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
05:15:15.380175 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:15.380189 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:15.380202 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:15.380213 | orchestrator | 2026-04-05 05:15:15.380224 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 05:15:15.380235 | orchestrator | Sunday 05 April 2026 05:15:15 +0000 (0:00:02.008) 0:01:51.882 ********** 2026-04-05 05:15:15.380262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:15.380381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:15:15.380450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.380470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:15.601407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 
'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:15.601505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601529 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:15.601546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601558 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.601581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:15.601600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820272 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:15.820496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820520 | orchestrator | 
skipping: [testbed-node-1] 2026-04-05 05:15:15.820550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}})  2026-04-05 05:15:15.820586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:15:15.820603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}})  2026-04-05 05:15:15.820616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.820639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:15.820659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960585 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:15.960697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 
05:15:15.960761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}})  2026-04-05 05:15:15.960793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}})  2026-04-05 05:15:15.960807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:15.960882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:15.960940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}})  2026-04-05 05:15:15.960963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:16.126607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}})  2026-04-05 05:15:16.126702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126745 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:16.126758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}})  2026-04-05 05:15:16.126840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}})  2026-04-05 05:15:16.126851 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:16.126891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.126913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 
'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:16.126939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.339995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 
'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}})  2026-04-05 05:15:16.340152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:15:16.340163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}})  2026-04-05 05:15:16.340195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:16.340243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.340279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}})  2026-04-05 05:15:16.340345 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 05:15:16.340360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}})  2026-04-05 05:15:16.340379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:15:16.462453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462517 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:16.462530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462560 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462572 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462584 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-48-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:15:16.462596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462618 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:16.462685 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c0f7189e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part14', 
'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:15:18.102635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:18.102763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:15:18.102823 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:18.102845 | orchestrator | 2026-04-05 05:15:18.102859 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:15:18.102871 | orchestrator | Sunday 05 April 2026 05:15:17 +0000 (0:00:02.428) 0:01:54.311 ********** 2026-04-05 05:15:18.102885 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.102924 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.102936 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.102949 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.102981 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.102993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.103010 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.103033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.103056 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274072 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274172 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274223 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274235 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274248 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274298 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274379 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:18.274397 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274411 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.274432 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501059 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501151 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501166 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501178 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501190 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501202 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501237 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501278 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501346 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.501366 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:18.501393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638473 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638490 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:18.638497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638570 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.638586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 
'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793622 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793654 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793667 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-05 05:15:18.793680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793713 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.793737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.982908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983033 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:18.983069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983102 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983115 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983157 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983172 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:18.983219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.086924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087147 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 
'item'})  2026-04-05 05:15:19.087186 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087217 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087236 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:15:19.087256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087299 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087346 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.087376 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.146821 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.146922 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.146954 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.146968 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.146982 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-48-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.147012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.147048 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.147066 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.147078 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:19.147099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 
'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717126 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717280 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c0f7189e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0f7189e-4b5b-4fcd-ab1c-32b10bef3794-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717333 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717398 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:15:30.717423 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:30.717437 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:15:30.717448 | orchestrator | 2026-04-05 05:15:30.717461 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:15:30.717473 | orchestrator | Sunday 05 April 2026 05:15:20 +0000 (0:00:02.807) 0:01:57.119 ********** 2026-04-05 05:15:30.717484 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:30.717496 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:30.717507 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:30.717518 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:30.717528 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:30.717546 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:30.717557 | orchestrator | ok: [testbed-manager] 2026-04-05 05:15:30.717568 | orchestrator | 2026-04-05 05:15:30.717579 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] 
*************** 2026-04-05 05:15:30.717590 | orchestrator | Sunday 05 April 2026 05:15:22 +0000 (0:00:02.483) 0:01:59.603 ********** 2026-04-05 05:15:30.717601 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:30.717611 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:30.717622 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:30.717632 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:30.717643 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:30.717653 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:30.717666 | orchestrator | ok: [testbed-manager] 2026-04-05 05:15:30.717679 | orchestrator | 2026-04-05 05:15:30.717691 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:15:30.717704 | orchestrator | Sunday 05 April 2026 05:15:24 +0000 (0:00:01.929) 0:02:01.532 ********** 2026-04-05 05:15:30.717717 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:15:30.717729 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:15:30.717742 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:15:30.717754 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:15:30.717767 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:15:30.717779 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:15:30.717791 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:15:30.717804 | orchestrator | 2026-04-05 05:15:30.717817 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:15:30.717830 | orchestrator | Sunday 05 April 2026 05:15:27 +0000 (0:00:02.473) 0:02:04.005 ********** 2026-04-05 05:15:30.717843 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:15:30.717856 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:15:30.717869 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:15:30.717882 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:15:30.717894 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
05:15:30.717907 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:30.717920 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:15:30.717932 | orchestrator |
2026-04-05 05:15:30.717945 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:15:30.717959 | orchestrator | Sunday 05 April 2026 05:15:29 +0000 (0:00:02.204) 0:02:06.209 **********
2026-04-05 05:15:30.717971 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:15:30.717984 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:15:30.717997 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:15:30.718010 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:30.718087 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.322301 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.322466 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-04-05 05:15:58.322486 | orchestrator |
2026-04-05 05:15:58.322499 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:15:58.322511 | orchestrator | Sunday 05 April 2026 05:15:32 +0000 (0:00:02.861) 0:02:09.071 **********
2026-04-05 05:15:58.322523 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:15:58.322533 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:15:58.322544 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:15:58.322555 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.322565 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.322576 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.322587 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:15:58.322597 | orchestrator |
2026-04-05 05:15:58.322608 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:15:58.322619 | orchestrator | Sunday 05 April 2026 05:15:34 +0000 (0:00:01.985) 0:02:11.056 **********
2026-04-05 05:15:58.322630 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:15:58.322641 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:15:58.322680 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:15:58.322706 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:15:58.322717 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:15:58.322728 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 05:15:58.322738 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:15:58.322749 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:15:58.322759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:15:58.322770 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:15:58.322780 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 05:15:58.322791 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 05:15:58.322801 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-05 05:15:58.322812 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-05 05:15:58.322825 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 05:15:58.322838 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 05:15:58.322850 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 05:15:58.322863 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-05 05:15:58.322875 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 05:15:58.322902 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 05:15:58.322915 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 05:15:58.322928 | orchestrator |
2026-04-05 05:15:58.322942 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:15:58.322966 | orchestrator | Sunday 05 April 2026 05:15:37 +0000 (0:00:03.275) 0:02:14.331 **********
2026-04-05 05:15:58.322979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:15:58.322991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:15:58.323003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:15:58.323016 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:15:58.323028 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:15:58.323041 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:15:58.323053 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:15:58.323066 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:15:58.323078 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:15:58.323091 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:15:58.323104 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:15:58.323116 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:15:58.323129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 05:15:58.323142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 05:15:58.323155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 05:15:58.323165 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 05:15:58.323186 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 05:15:58.323197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 05:15:58.323208 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.323300 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 05:15:58.323313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 05:15:58.323324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 05:15:58.323335 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.323345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-05 05:15:58.323357 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-05 05:15:58.323378 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-05 05:15:58.323389 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:15:58.323399 | orchestrator |
2026-04-05 05:15:58.323429 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 05:15:58.323442 | orchestrator | Sunday 05 April 2026 05:15:39 +0000 (0:00:02.076) 0:02:16.408 **********
2026-04-05 05:15:58.323453 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:15:58.323464 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:15:58.323474 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:15:58.323485 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:15:58.323515 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 05:15:58.323527 | orchestrator |
2026-04-05 05:15:58.323538 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:15:58.323550 | orchestrator | Sunday 05 April 2026 05:15:41 +0000 (0:00:02.161) 0:02:18.569 **********
2026-04-05 05:15:58.323564 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323583 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.323603 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.323614 | orchestrator |
2026-04-05 05:15:58.323625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:15:58.323635 | orchestrator | Sunday 05 April 2026 05:15:43 +0000 (0:00:01.342) 0:02:19.912 **********
2026-04-05 05:15:58.323646 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323657 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.323667 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.323678 | orchestrator |
2026-04-05 05:15:58.323688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:15:58.323699 | orchestrator | Sunday 05 April 2026 05:15:44 +0000 (0:00:01.343) 0:02:21.256 **********
2026-04-05 05:15:58.323710 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323720 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:15:58.323731 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:15:58.323741 | orchestrator |
2026-04-05 05:15:58.323759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:15:58.323770 | orchestrator | Sunday 05 April 2026 05:15:46 +0000 (0:00:01.495) 0:02:22.751 **********
2026-04-05 05:15:58.323781 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:15:58.323792 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:15:58.323803 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:15:58.323813 | orchestrator |
2026-04-05 05:15:58.323824 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:15:58.323835 | orchestrator | Sunday 05 April 2026 05:15:47 +0000 (0:00:01.422) 0:02:24.174 **********
2026-04-05 05:15:58.323845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:15:58.323856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:15:58.323867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:15:58.323878 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323888 | orchestrator |
2026-04-05 05:15:58.323899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:15:58.323910 | orchestrator | Sunday 05 April 2026 05:15:48 +0000 (0:00:01.371) 0:02:25.545 **********
2026-04-05 05:15:58.323920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:15:58.323931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:15:58.323942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:15:58.323952 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.323963 | orchestrator |
2026-04-05 05:15:58.323973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:15:58.323991 | orchestrator | Sunday 05 April 2026 05:15:50 +0000 (0:00:01.680) 0:02:27.226 **********
2026-04-05 05:15:58.324002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:15:58.324013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:15:58.324023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:15:58.324034 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:15:58.324044 | orchestrator |
2026-04-05 05:15:58.324055 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:15:58.324066 | orchestrator | Sunday 05 April 2026 05:15:52 +0000 (0:00:01.654) 0:02:28.881 **********
2026-04-05 05:15:58.324076 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:15:58.324087 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:15:58.324098 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:15:58.324109 | orchestrator |
2026-04-05 05:15:58.324119 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:15:58.324130 | orchestrator | Sunday 05 April 2026 05:15:53 +0000 (0:00:01.686) 0:02:30.567 **********
2026-04-05 05:15:58.324140 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 05:15:58.324151 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 05:15:58.324162 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-05 05:15:58.324172 | orchestrator |
2026-04-05 05:15:58.324183 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:15:58.324193 | orchestrator | Sunday 05 April 2026 05:15:55 +0000 (0:00:01.512) 0:02:32.079 **********
2026-04-05 05:15:58.324204 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:15:58.324215 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:15:58.324226 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:15:58.324237 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:15:58.324247 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:15:58.324258 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:15:58.324269 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:15:58.324279 | orchestrator |
2026-04-05 05:15:58.324290 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:15:58.324301 | orchestrator | Sunday 05 April 2026 05:15:57 +0000 (0:00:01.916) 0:02:34.002 **********
2026-04-05 05:15:58.324311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:15:58.324322 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:15:58.324333 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:15:58.324350 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:16:49.151701 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:16:49.151786 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:16:49.151794 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:16:49.151799 | orchestrator |
2026-04-05 05:16:49.151806 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-05 05:16:49.151813 | orchestrator | Sunday 05 April 2026 05:16:00 +0000 (0:00:03.343) 0:02:37.345 **********
2026-04-05 05:16:49.151818 | orchestrator | changed: [testbed-node-3]
2026-04-05 05:16:49.151824 | orchestrator | changed: [testbed-node-4]
2026-04-05 05:16:49.151829 | orchestrator | changed: [testbed-node-5]
2026-04-05 05:16:49.151834 | orchestrator | changed: [testbed-manager]
2026-04-05 05:16:49.151840 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:16:49.151845 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:16:49.151867 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:16:49.151872 | orchestrator |
2026-04-05 05:16:49.151878 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-05 05:16:49.151894 | orchestrator | Sunday 05 April 2026 05:16:11 +0000 (0:00:11.162) 0:02:48.508 **********
2026-04-05 05:16:49.151899 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.151904 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.151909 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.151914 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.151919 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.151924 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.151929 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.151934 | orchestrator |
2026-04-05 05:16:49.151939 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-05 05:16:49.151944 | orchestrator | Sunday 05 April 2026 05:16:13 +0000 (0:00:02.207) 0:02:50.715 **********
2026-04-05 05:16:49.151949 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.151954 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.151959 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.151964 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.151969 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.151974 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.151979 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.151985 | orchestrator |
2026-04-05 05:16:49.151990 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-05 05:16:49.151995 | orchestrator | Sunday 05 April 2026 05:16:16 +0000 (0:00:02.183) 0:02:52.899 **********
2026-04-05 05:16:49.152000 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:16:49.152005 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:16:49.152010 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:16:49.152015 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152020 | orchestrator | changed: [testbed-node-3]
2026-04-05 05:16:49.152025 | orchestrator | changed: [testbed-node-4]
2026-04-05 05:16:49.152030 | orchestrator | changed: [testbed-node-5]
2026-04-05 05:16:49.152035 | orchestrator |
2026-04-05 05:16:49.152040 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-04-05 05:16:49.152045 | orchestrator | Sunday 05 April 2026 05:16:19 +0000 (0:00:03.233) 0:02:56.132 **********
2026-04-05 05:16:49.152051 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-05 05:16:49.152058 | orchestrator |
2026-04-05 05:16:49.152063 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-04-05 05:16:49.152068 | orchestrator | Sunday 05 April 2026 05:16:22 +0000 (0:00:03.179) 0:02:59.312 **********
2026-04-05 05:16:49.152073 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152078 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152083 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152088 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152093 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152098 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152103 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152108 | orchestrator |
2026-04-05 05:16:49.152113 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-04-05 05:16:49.152118 | orchestrator | Sunday 05 April 2026 05:16:24 +0000 (0:00:01.988) 0:03:01.300 **********
2026-04-05 05:16:49.152124 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152129 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152134 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152139 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152144 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152149 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152154 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152163 | orchestrator |
2026-04-05 05:16:49.152168 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-04-05 05:16:49.152173 | orchestrator | Sunday 05 April 2026 05:16:26 +0000 (0:00:02.156) 0:03:03.457 **********
2026-04-05 05:16:49.152178 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152183 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152188 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152193 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152198 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152203 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152209 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152214 | orchestrator |
2026-04-05 05:16:49.152219 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-04-05 05:16:49.152224 | orchestrator | Sunday 05 April 2026 05:16:28 +0000 (0:00:01.999) 0:03:05.457 **********
2026-04-05 05:16:49.152229 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152234 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152239 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152244 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152249 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152254 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152259 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152265 | orchestrator |
2026-04-05 05:16:49.152281 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-04-05 05:16:49.152288 | orchestrator | Sunday 05 April 2026 05:16:31 +0000 (0:00:02.686) 0:03:08.143 **********
2026-04-05 05:16:49.152294 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152300 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152306 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152312 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152318 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152323 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152329 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152335 | orchestrator |
2026-04-05 05:16:49.152341 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-04-05 05:16:49.152347 | orchestrator | Sunday 05 April 2026 05:16:33 +0000 (0:00:02.097) 0:03:10.241 **********
2026-04-05 05:16:49.152353 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152358 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152364 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152370 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152376 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152382 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152388 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152394 | orchestrator |
2026-04-05 05:16:49.152403 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-04-05 05:16:49.152410 | orchestrator | Sunday 05 April 2026 05:16:35 +0000 (0:00:02.204) 0:03:12.446 **********
2026-04-05 05:16:49.152416 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152421 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152427 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152433 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152439 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152445 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152451 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152457 | orchestrator |
2026-04-05 05:16:49.152463 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-04-05 05:16:49.152469 | orchestrator | Sunday 05 April 2026 05:16:37 +0000 (0:00:01.899) 0:03:14.346 **********
2026-04-05 05:16:49.152475 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152481 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152486 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152496 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152502 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152508 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152514 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152520 | orchestrator |
2026-04-05 05:16:49.152526 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-04-05 05:16:49.152573 | orchestrator | Sunday 05 April 2026 05:16:39 +0000 (0:00:02.112) 0:03:16.458 **********
2026-04-05 05:16:49.152581 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152587 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152593 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152600 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152606 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152612 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152619 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152628 | orchestrator |
2026-04-05 05:16:49.152636 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-04-05 05:16:49.152643 | orchestrator | Sunday 05 April 2026 05:16:41 +0000 (0:00:02.196) 0:03:18.656 **********
2026-04-05 05:16:49.152649 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152655 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152661 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152666 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152671 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152676 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152681 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152686 | orchestrator |
2026-04-05 05:16:49.152691 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-04-05 05:16:49.152696 | orchestrator | Sunday 05 April 2026 05:16:44 +0000 (0:00:02.106) 0:03:20.762 **********
2026-04-05 05:16:49.152701 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152706 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152711 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152716 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152721 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152726 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152731 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152736 | orchestrator |
2026-04-05 05:16:49.152741 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-04-05 05:16:49.152746 | orchestrator | Sunday 05 April 2026 05:16:46 +0000 (0:00:02.227) 0:03:22.990 **********
2026-04-05 05:16:49.152751 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152756 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152761 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152766 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152771 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:16:49.152776 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:16:49.152781 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:16:49.152786 | orchestrator |
2026-04-05 05:16:49.152791 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-04-05 05:16:49.152797 | orchestrator | Sunday 05 April 2026 05:16:48 +0000 (0:00:01.930) 0:03:24.920 **********
2026-04-05 05:16:49.152802 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:16:49.152807 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:16:49.152812 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:16:49.152818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 05:16:49.152825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 05:16:49.152830 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:16:49.152840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 05:17:12.605468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 05:17:12.605580 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.605670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 05:17:12.605685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 05:17:12.605697 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.605707 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.605719 | orchestrator |
2026-04-05 05:17:12.605731 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-05 05:17:12.605761 | orchestrator | Sunday 05 April 2026 05:16:50 +0000 (0:00:02.343) 0:03:27.264 **********
2026-04-05 05:17:12.605773 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.605784 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.605795 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.605806 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.605817 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.605827 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.605838 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.605849 | orchestrator |
2026-04-05 05:17:12.605861 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-05 05:17:12.605871 | orchestrator | Sunday 05 April 2026 05:16:52 +0000 (0:00:01.891) 0:03:29.156 **********
2026-04-05 05:17:12.605882 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.605893 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.605904 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.605915 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.605925 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.605936 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.605947 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.605958 | orchestrator |
2026-04-05 05:17:12.605975 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-05 05:17:12.605994 | orchestrator | Sunday 05 April 2026 05:16:54 +0000 (0:00:02.125) 0:03:31.281 **********
2026-04-05 05:17:12.606011 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.606122 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.606144 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.606158 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.606169 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.606180 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.606226 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.606238 | orchestrator |
2026-04-05 05:17:12.606249 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-05 05:17:12.606260 | orchestrator | Sunday 05 April 2026 05:16:56 +0000 (0:00:02.026) 0:03:33.307 **********
2026-04-05 05:17:12.606270 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.606281 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.606292 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.606302 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.606313 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.606323 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.606334 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.606344 | orchestrator |
2026-04-05 05:17:12.606355 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-05 05:17:12.606366 | orchestrator | Sunday 05 April 2026 05:16:58 +0000 (0:00:02.147) 0:03:35.455 **********
2026-04-05 05:17:12.606414 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.606478 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.606490 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.606501 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.606512 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.606522 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.606533 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.606543 | orchestrator |
2026-04-05 05:17:12.606555 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-05 05:17:12.606575 | orchestrator | Sunday 05 April 2026 05:17:01 +0000 (0:00:02.315) 0:03:37.770 **********
2026-04-05 05:17:12.606620 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.606639 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.606656 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.606673 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.606689 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.606706 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.606724 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.606741 | orchestrator |
2026-04-05 05:17:12.606757 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-05 05:17:12.606772 | orchestrator | Sunday 05 April 2026 05:17:02 +0000 (0:00:01.843) 0:03:39.614 **********
2026-04-05 05:17:12.606790 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:17:12.606807 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:17:12.606825 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:17:12.606843 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:17:12.606860 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 05:17:12.606878 | orchestrator |
2026-04-05 05:17:12.606895 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-05 05:17:12.606913 | orchestrator | Sunday 05 April 2026 05:17:05 +0000 (0:00:02.495) 0:03:42.109 **********
2026-04-05 05:17:12.606930 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:17:12.606950 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:17:12.606968 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:17:12.606986 | orchestrator |
2026-04-05 05:17:12.607006 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-05 05:17:12.607117 | orchestrator | Sunday 05 April 2026 05:17:06 +0000 (0:00:01.355) 0:03:43.465 **********
2026-04-05 05:17:12.607152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})
2026-04-05 05:17:12.607166 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})
2026-04-05 05:17:12.607176 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.607187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})
2026-04-05 05:17:12.607198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})
2026-04-05 05:17:12.607210 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.607242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})
2026-04-05 05:17:12.607262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})
2026-04-05 05:17:12.607281 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.607298 | orchestrator |
2026-04-05 05:17:12.607318 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-05 05:17:12.607335 | orchestrator | Sunday 05 April 2026 05:17:08 +0000 (0:00:01.435) 0:03:44.901 **********
2026-04-05 05:17:12.607374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607414 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.607431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607464 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.607481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}, 'ansible_loop_var': 'item'})
2026-04-05 05:17:12.607518 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.607534 | orchestrator |
2026-04-05 05:17:12.607551 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-05 05:17:12.607569 | orchestrator | Sunday 05 April 2026 05:17:09 +0000 (0:00:01.405) 0:03:46.307 **********
2026-04-05 05:17:12.607616 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.607636 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.607656 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.607673 | orchestrator |
2026-04-05 05:17:12.607691 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-05 05:17:12.607711 | orchestrator | Sunday 05 April 2026 05:17:11 +0000 (0:00:01.342) 0:03:47.739 **********
2026-04-05 05:17:12.607727 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.607745 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:12.607762 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:12.607779 | orchestrator |
2026-04-05 05:17:12.607797 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-05 05:17:12.607815 | orchestrator | Sunday 05 April 2026 05:17:12 +0000 (0:00:01.342) 0:03:49.081 **********
2026-04-05 05:17:12.607835 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:17:12.607872 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:17:18.256305 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:17:18.257252 | orchestrator |
2026-04-05 05:17:18.257288 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-05 05:17:18.257301 | orchestrator | Sunday 05 April 2026 05:17:13 +0000 (0:00:01.359) 0:03:50.441 **********
2026-04-05 05:17:18.257313 | orchestrator | skipping:
[testbed-node-3] 2026-04-05 05:17:18.257324 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:18.257361 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:18.257373 | orchestrator | 2026-04-05 05:17:18.257384 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-04-05 05:17:18.257395 | orchestrator | Sunday 05 April 2026 05:17:15 +0000 (0:00:01.348) 0:03:51.790 ********** 2026-04-05 05:17:18.257407 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}) 2026-04-05 05:17:18.257435 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}) 2026-04-05 05:17:18.257446 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}) 2026-04-05 05:17:18.257457 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}) 2026-04-05 05:17:18.257467 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}) 2026-04-05 05:17:18.257478 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}) 2026-04-05 05:17:18.257489 | orchestrator | 2026-04-05 05:17:18.257500 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-04-05 05:17:18.257511 | orchestrator | Sunday 05 April 2026 05:17:17 +0000 (0:00:02.832) 0:03:54.622 ********** 2026-04-05 05:17:18.257528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': 
{'exists': True, 'path': '/dev/ceph-2b14998b-6337-5d33-8563-647c08b40df2/osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1775358362.7492392, 'mtime': 1775358362.744239, 'ctime': 1775358362.744239, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2b14998b-6337-5d33-8563-647c08b40df2/osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:18.257565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4671660f-3880-5125-9575-24d25698498a/osd-block-4671660f-3880-5125-9575-24d25698498a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1775358382.2045383, 'mtime': 1775358382.2015383, 'ctime': 1775358382.2015383, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': 
True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4671660f-3880-5125-9575-24d25698498a/osd-block-4671660f-3880-5125-9575-24d25698498a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:18.257587 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:18.257631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-71b5f103-fb0e-5af6-8506-51783512c8b9/osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1775358362.6206162, 'mtime': 1775358362.614616, 'ctime': 1775358362.614616, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-71b5f103-fb0e-5af6-8506-51783512c8b9/osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}, 'ansible_loop_var': 
'item'})  2026-04-05 05:17:18.257645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8259097b-349e-523a-9f4d-33b374f7dc5d/osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1775358381.1679049, 'mtime': 1775358381.1629047, 'ctime': 1775358381.1629047, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8259097b-349e-523a-9f4d-33b374f7dc5d/osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:18.257658 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:18.257677 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ee367cf6-46c0-523d-847e-ea936940168f/osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1775358363.9681354, 'mtime': 1775358363.9631355, 'ctime': 1775358363.9631355, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ee367cf6-46c0-523d-847e-ea936940168f/osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.267539 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3/osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1775358382.6054237, 'mtime': 1775358382.6004238, 'ctime': 1775358382.6004238, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3/osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 
'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.267711 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:24.267741 | orchestrator | 2026-04-05 05:17:24.267761 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-04-05 05:17:24.267781 | orchestrator | Sunday 05 April 2026 05:17:19 +0000 (0:00:01.430) 0:03:56.052 ********** 2026-04-05 05:17:24.267801 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 05:17:24.267815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 05:17:24.267826 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:24.267837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})  2026-04-05 05:17:24.267848 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})  2026-04-05 05:17:24.267858 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:24.267870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 05:17:24.267881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 05:17:24.267891 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:24.267902 | orchestrator | 2026-04-05 05:17:24.267913 | orchestrator | TASK [ceph-validate : 
Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-04-05 05:17:24.267926 | orchestrator | Sunday 05 April 2026 05:17:20 +0000 (0:00:01.453) 0:03:57.506 ********** 2026-04-05 05:17:24.267939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.267952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.267985 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:24.267997 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268027 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268039 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:24.268053 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 
'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268085 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:24.268098 | orchestrator | 2026-04-05 05:17:24.268111 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-04-05 05:17:24.268123 | orchestrator | Sunday 05 April 2026 05:17:22 +0000 (0:00:01.442) 0:03:58.949 ********** 2026-04-05 05:17:24.268136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'})  2026-04-05 05:17:24.268149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'})  2026-04-05 05:17:24.268161 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:24.268174 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'})  2026-04-05 05:17:24.268186 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'})  2026-04-05 05:17:24.268197 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:24.268208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'})  2026-04-05 
05:17:24.268218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'})  2026-04-05 05:17:24.268229 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:24.268239 | orchestrator | 2026-04-05 05:17:24.268250 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-04-05 05:17:24.268261 | orchestrator | Sunday 05 April 2026 05:17:23 +0000 (0:00:01.689) 0:04:00.639 ********** 2026-04-05 05:17:24.268272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2b14998b-6337-5d33-8563-647c08b40df2', 'data_vg': 'ceph-2b14998b-6337-5d33-8563-647c08b40df2'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4671660f-3880-5125-9575-24d25698498a', 'data_vg': 'ceph-4671660f-3880-5125-9575-24d25698498a'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268301 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:24.268312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-71b5f103-fb0e-5af6-8506-51783512c8b9', 'data_vg': 'ceph-71b5f103-fb0e-5af6-8506-51783512c8b9'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8259097b-349e-523a-9f4d-33b374f7dc5d', 
'data_vg': 'ceph-8259097b-349e-523a-9f4d-33b374f7dc5d'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268334 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:24.268345 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ee367cf6-46c0-523d-847e-ea936940168f', 'data_vg': 'ceph-ee367cf6-46c0-523d-847e-ea936940168f'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:24.268363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-d286f04f-da20-50d3-800d-bbe3052cfbc3', 'data_vg': 'ceph-d286f04f-da20-50d3-800d-bbe3052cfbc3'}, 'ansible_loop_var': 'item'})  2026-04-05 05:17:34.988308 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:34.988420 | orchestrator | 2026-04-05 05:17:34.988438 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-04-05 05:17:34.988451 | orchestrator | Sunday 05 April 2026 05:17:25 +0000 (0:00:01.480) 0:04:02.119 ********** 2026-04-05 05:17:34.988462 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:34.988473 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:34.988484 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:34.988494 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:34.988505 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:34.988516 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:34.988527 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:34.988537 | orchestrator | 2026-04-05 05:17:34.988565 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-04-05 05:17:34.988577 | orchestrator | Sunday 05 April 2026 05:17:27 +0000 (0:00:01.902) 
0:04:04.022 ********** 2026-04-05 05:17:34.988587 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:34.988598 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:34.988608 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:34.988618 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:34.988630 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 05:17:34.988697 | orchestrator | 2026-04-05 05:17:34.988710 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-04-05 05:17:34.988721 | orchestrator | Sunday 05 April 2026 05:17:29 +0000 (0:00:02.526) 0:04:06.548 ********** 2026-04-05 05:17:34.988732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988839 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:34.988861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988941 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:34.988954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.988992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989017 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:34.989030 | orchestrator | 2026-04-05 05:17:34.989043 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-04-05 05:17:34.989056 | orchestrator | Sunday 05 April 2026 05:17:31 +0000 (0:00:01.596) 0:04:08.144 ********** 2026-04-05 05:17:34.989069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-04-05 05:17:34.989082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989218 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:34.989229 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:34.989240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-04-05 05:17:34.989261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989293 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:34.989304 | orchestrator | 2026-04-05 05:17:34.989315 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-04-05 05:17:34.989326 | orchestrator | Sunday 05 April 2026 05:17:33 +0000 (0:00:01.736) 0:04:09.881 ********** 2026-04-05 05:17:34.989337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989390 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:34.989401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 
05:17:34.989412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989454 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:34.989465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 05:17:34.989519 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:34.989536 | orchestrator | 2026-04-05 05:17:34.989547 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-04-05 05:17:34.989558 | orchestrator | Sunday 05 April 2026 05:17:34 +0000 (0:00:01.490) 0:04:11.372 ********** 2026-04-05 05:17:34.989569 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 05:17:34.989579 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:34.989597 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623164 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623272 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623287 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623298 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.623309 | orchestrator | 2026-04-05 05:17:49.623321 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-04-05 05:17:49.623333 | orchestrator | Sunday 05 April 2026 05:17:36 +0000 (0:00:01.917) 0:04:13.289 ********** 2026-04-05 05:17:49.623344 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.623354 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.623365 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623393 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623404 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623415 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623426 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.623436 | orchestrator | 2026-04-05 05:17:49.623448 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-04-05 05:17:49.623459 | orchestrator | Sunday 05 April 2026 05:17:38 +0000 (0:00:02.083) 0:04:15.373 ********** 2026-04-05 05:17:49.623469 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.623480 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.623491 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623501 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623512 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623522 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623533 | orchestrator | 
skipping: [testbed-manager] 2026-04-05 05:17:49.623544 | orchestrator | 2026-04-05 05:17:49.623555 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-04-05 05:17:49.623566 | orchestrator | Sunday 05 April 2026 05:17:40 +0000 (0:00:01.978) 0:04:17.352 ********** 2026-04-05 05:17:49.623577 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.623588 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.623598 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623609 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623619 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623630 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623641 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.623651 | orchestrator | 2026-04-05 05:17:49.623662 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-04-05 05:17:49.623705 | orchestrator | Sunday 05 April 2026 05:17:42 +0000 (0:00:01.855) 0:04:19.207 ********** 2026-04-05 05:17:49.623720 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.623733 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.623746 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623759 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623771 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623783 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623796 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.623808 | orchestrator | 2026-04-05 05:17:49.623822 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-04-05 05:17:49.623834 | orchestrator | Sunday 05 April 2026 05:17:44 +0000 (0:00:02.155) 0:04:21.363 ********** 2026-04-05 05:17:49.623847 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
05:17:49.623860 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.623872 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.623908 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.623922 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.623934 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.623947 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.623959 | orchestrator | 2026-04-05 05:17:49.623973 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-04-05 05:17:49.623985 | orchestrator | Sunday 05 April 2026 05:17:46 +0000 (0:00:01.900) 0:04:23.264 ********** 2026-04-05 05:17:49.623997 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.624010 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.624022 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.624034 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:49.624047 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:49.624059 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:49.624070 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:49.624081 | orchestrator | 2026-04-05 05:17:49.624092 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-04-05 05:17:49.624102 | orchestrator | Sunday 05 April 2026 05:17:48 +0000 (0:00:02.130) 0:04:25.395 ********** 2026-04-05 05:17:49.624115 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:49.624127 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:49.624140 | orchestrator | skipping: [testbed-node-0] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:49.624152 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:49.624164 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:49.624177 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:49.624188 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:49.624217 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:49.624228 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:49.624245 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:49.624256 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:49.624267 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 
05:17:49.624278 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:49.624289 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:49.624300 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:49.624318 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:49.624328 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:49.624339 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:49.624350 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:49.624361 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:49.624372 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:49.624382 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:49.624393 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, 
profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:49.624404 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:49.624414 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:49.624425 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:49.624436 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:49.624447 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:49.624458 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:49.624468 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:49.624485 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.028626 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:54.028797 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 
'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.028818 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.028850 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.028862 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.028896 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:54.028907 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.028918 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.028929 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.028941 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.028952 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 
05:17:54.028963 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.028974 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.028985 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:17:54.028995 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.029006 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.029017 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.029027 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:54.029038 | orchestrator | 2026-04-05 05:17:54.029050 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-04-05 05:17:54.029062 | orchestrator | Sunday 05 April 2026 05:17:51 +0000 (0:00:02.391) 0:04:27.786 ********** 2026-04-05 05:17:54.029073 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:54.029083 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:54.029094 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:54.029104 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:17:54.029115 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:17:54.029125 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:17:54.029135 | orchestrator | skipping: 
[testbed-manager] 2026-04-05 05:17:54.029146 | orchestrator | 2026-04-05 05:17:54.029157 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-04-05 05:17:54.029170 | orchestrator | Sunday 05 April 2026 05:17:53 +0000 (0:00:02.145) 0:04:29.931 ********** 2026-04-05 05:17:54.029183 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.029196 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.029211 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.029249 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.029262 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.029275 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.029288 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.029301 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd 
pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.029314 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.029327 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.029340 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.029353 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.029365 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:17:54.029378 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.029391 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.029403 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.029416 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.029428 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, 
profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.029441 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:17:54.029454 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:17:54.029467 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.029480 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:17:54.029492 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:17:54.029505 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:17:54.029524 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:17:54.029640 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:17:54.029663 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:17:54.029675 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 
'client.cinder'})  2026-04-05 05:17:54.029720 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:18:21.895950 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:18:21.896042 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:18:21.896050 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:18:21.896056 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:18:21.896061 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:18:21.896067 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:18:21.896073 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:18:21.896077 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:18:21.896081 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:18:21.896086 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-05 05:18:21.896090 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-05 05:18:21.896094 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:18:21.896098 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-05 05:18:21.896102 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-05 05:18:21.896106 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:18:21.896111 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:18:21.896130 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:18:21.896134 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-05 05:18:21.896138 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-05 05:18:21.896143 | orchestrator | 
skipping: [testbed-node-5] 2026-04-05 05:18:21.896147 | orchestrator | 2026-04-05 05:18:21.896152 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-04-05 05:18:21.896158 | orchestrator | Sunday 05 April 2026 05:17:55 +0000 (0:00:02.326) 0:04:32.258 ********** 2026-04-05 05:18:21.896162 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:18:21.896166 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:18:21.896170 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:18:21.896175 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:18:21.896179 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:18:21.896183 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:18:21.896187 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:18:21.896191 | orchestrator | 2026-04-05 05:18:21.896195 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-04-05 05:18:21.896199 | orchestrator | Sunday 05 April 2026 05:17:57 +0000 (0:00:02.150) 0:04:34.408 ********** 2026-04-05 05:18:21.896203 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:18:21.896207 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:18:21.896211 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:18:21.896215 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:18:21.896219 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:18:21.896223 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:18:21.896228 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:18:21.896232 | orchestrator | 2026-04-05 05:18:21.896236 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-04-05 05:18:21.896250 | orchestrator | Sunday 05 April 2026 05:17:59 +0000 (0:00:01.951) 0:04:36.360 ********** 2026-04-05 05:18:21.896255 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:18:21.896259 | orchestrator | 
skipping: [testbed-node-1] 2026-04-05 05:18:21.896263 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:18:21.896267 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:18:21.896271 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:18:21.896275 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:18:21.896282 | orchestrator | skipping: [testbed-manager] 2026-04-05 05:18:21.896286 | orchestrator | 2026-04-05 05:18:21.896290 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-05 05:18:21.896294 | orchestrator | Sunday 05 April 2026 05:18:02 +0000 (0:00:02.496) 0:04:38.857 ********** 2026-04-05 05:18:21.896298 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-05 05:18:21.896304 | orchestrator | 2026-04-05 05:18:21.896308 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-04-05 05:18:21.896312 | orchestrator | Sunday 05 April 2026 05:18:04 +0000 (0:00:02.829) 0:04:41.687 ********** 2026-04-05 05:18:21.896317 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896321 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896325 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896329 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896337 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896342 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-05 05:18:21.896346 | 
orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-04-05 05:18:21.896350 | orchestrator |
2026-04-05 05:18:21.896354 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-04-05 05:18:21.896358 | orchestrator | Sunday 05 April 2026 05:18:07 +0000 (0:00:02.166) 0:04:43.854 **********
2026-04-05 05:18:21.896362 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:18:21.896366 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:18:21.896370 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:18:21.896374 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:18:21.896378 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:18:21.896382 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:18:21.896386 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:18:21.896390 | orchestrator |
2026-04-05 05:18:21.896394 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-04-05 05:18:21.896398 | orchestrator | Sunday 05 April 2026 05:18:09 +0000 (0:00:02.320) 0:04:46.174 **********
2026-04-05 05:18:21.896402 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:18:21.896406 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:18:21.896411 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:18:21.896414 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:18:21.896419 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:18:21.896423 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:18:21.896427 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:18:21.896431 | orchestrator |
2026-04-05 05:18:21.896435 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-04-05 05:18:21.896439 | orchestrator | Sunday 05 April 2026 05:18:11 +0000 (0:00:02.076) 0:04:48.250 **********
2026-04-05 05:18:21.896443 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:18:21.896448 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:18:21.896452 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:18:21.896456 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:18:21.896460 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:18:21.896464 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:18:21.896468 | orchestrator | ok: [testbed-manager]
2026-04-05 05:18:21.896472 | orchestrator |
2026-04-05 05:18:21.896476 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-04-05 05:18:21.896480 | orchestrator | Sunday 05 April 2026 05:18:14 +0000 (0:00:02.613) 0:04:50.864 **********
2026-04-05 05:18:21.896484 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:18:21.896488 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:18:21.896492 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:18:21.896496 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:18:21.896500 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:18:21.896504 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:18:21.896508 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:18:21.896512 | orchestrator |
2026-04-05 05:18:21.896516 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-05 05:18:21.896521 | orchestrator | Sunday 05 April 2026 05:18:16 +0000 (0:00:02.419) 0:04:53.284 **********
2026-04-05 05:18:21.896526 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:18:21.896531 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:18:21.896536 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:18:21.896540 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:18:21.896545 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:18:21.896550 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:18:21.896555 | orchestrator | skipping: [testbed-manager]
2026-04-05 05:18:21.896560 | orchestrator |
2026-04-05 05:18:21.896565 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-04-05 05:18:21.896573 | orchestrator | Sunday 05 April 2026 05:18:19 +0000 (0:00:02.463) 0:04:55.747 **********
2026-04-05 05:18:21.896578 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:18:21.896583 | orchestrator |
2026-04-05 05:18:21.896587 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-04-05 05:18:21.896592 | orchestrator | Sunday 05 April 2026 05:18:21 +0000 (0:00:02.669) 0:04:58.417 **********
2026-04-05 05:18:21.896597 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:18:21.896602 | orchestrator |
2026-04-05 05:18:21.896609 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-04-05 05:19:01.518377 | orchestrator |
2026-04-05 05:19:01.518495 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:19:01.518512 | orchestrator | Sunday 05 April 2026 05:18:23 +0000 (0:00:01.441) 0:04:59.858 **********
2026-04-05 05:19:01.518524 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.518537 | orchestrator |
2026-04-05 05:19:01.518548 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:19:01.518575 | orchestrator | Sunday 05 April 2026 05:18:24 +0000 (0:00:01.446) 0:05:01.304 **********
2026-04-05 05:19:01.518586 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.518597 | orchestrator |
2026-04-05 05:19:01.518608 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-04-05 05:19:01.518619 | orchestrator | Sunday 05 April 2026 05:18:25 +0000 (0:00:01.178) 0:05:02.483 **********
2026-04-05 05:19:01.518632 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20',
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-05 05:19:01.518646 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-05 05:19:01.518658 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-05 05:19:01.518669 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-05 05:19:01.518681 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-05 05:19:01.518694 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 
'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}])
2026-04-05 05:19:01.518707 | orchestrator |
2026-04-05 05:19:01.518718 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-05 05:19:01.518750 | orchestrator |
2026-04-05 05:19:01.518762 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-05 05:19:01.518773 | orchestrator | Sunday 05 April 2026 05:18:35 +0000 (0:00:10.191) 0:05:12.675 **********
2026-04-05 05:19:01.518783 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.518794 | orchestrator |
2026-04-05 05:19:01.518804 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-05 05:19:01.518815 | orchestrator | Sunday 05 April 2026 05:18:37 +0000 (0:00:01.531) 0:05:14.206 **********
2026-04-05 05:19:01.518826 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.518870 | orchestrator |
2026-04-05 05:19:01.518882 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-05 05:19:01.518892 | orchestrator | Sunday 05 April 2026 05:18:38 +0000 (0:00:01.156) 0:05:15.363 **********
2026-04-05 05:19:01.518906 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:01.518926 | orchestrator |
2026-04-05 05:19:01.518944 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-05 05:19:01.518962 | orchestrator | Sunday 05 April 2026 05:18:39 +0000 (0:00:01.140) 0:05:16.504 **********
2026-04-05 05:19:01.518979 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.518998 | orchestrator |
2026-04-05 05:19:01.519017 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:19:01.519035 | orchestrator | Sunday 05 April 2026 05:18:40 +0000 (0:00:01.172) 0:05:17.676 **********
2026-04-05 05:19:01.519055 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-05 05:19:01.519075 | orchestrator |
2026-04-05 05:19:01.519094 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:19:01.519131 | orchestrator | Sunday 05 April 2026 05:18:42 +0000 (0:00:01.143) 0:05:18.820 **********
2026-04-05 05:19:01.519145 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519158 | orchestrator |
2026-04-05 05:19:01.519171 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:19:01.519183 | orchestrator | Sunday 05 April 2026 05:18:43 +0000 (0:00:01.456) 0:05:20.276 **********
2026-04-05 05:19:01.519193 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519204 | orchestrator |
2026-04-05 05:19:01.519222 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:19:01.519233 | orchestrator | Sunday 05 April 2026 05:18:44 +0000 (0:00:01.160) 0:05:21.436 **********
2026-04-05 05:19:01.519257 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519268 | orchestrator |
2026-04-05 05:19:01.519279 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:19:01.519290 | orchestrator | Sunday 05 April 2026 05:18:46 +0000 (0:00:01.450) 0:05:22.887 **********
2026-04-05 05:19:01.519300 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519311 | orchestrator |
2026-04-05 05:19:01.519321 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:19:01.519332 | orchestrator | Sunday 05 April 2026 05:18:47 +0000 (0:00:01.163) 0:05:24.050 **********
2026-04-05 05:19:01.519342 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519353 | orchestrator |
2026-04-05 05:19:01.519364 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:19:01.519374 | orchestrator | Sunday 05 April 2026 05:18:48 +0000 (0:00:01.122) 0:05:25.172 **********
2026-04-05 05:19:01.519385 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519395 | orchestrator |
2026-04-05 05:19:01.519406 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:19:01.519417 | orchestrator | Sunday 05 April 2026 05:18:49 +0000 (0:00:01.136) 0:05:26.309 **********
2026-04-05 05:19:01.519428 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:01.519438 | orchestrator |
2026-04-05 05:19:01.519449 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:19:01.519460 | orchestrator | Sunday 05 April 2026 05:18:50 +0000 (0:00:01.186) 0:05:27.495 **********
2026-04-05 05:19:01.519481 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519492 | orchestrator |
2026-04-05 05:19:01.519503 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:19:01.519513 | orchestrator | Sunday 05 April 2026 05:18:51 +0000 (0:00:01.114) 0:05:28.610 **********
2026-04-05 05:19:01.519524 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:19:01.519535 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:19:01.519545 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:19:01.519556 | orchestrator |
2026-04-05 05:19:01.519566 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:19:01.519577 | orchestrator | Sunday 05 April 2026 05:18:53 +0000 (0:00:01.754) 0:05:30.365 **********
2026-04-05 05:19:01.519587 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:01.519598 | orchestrator |
2026-04-05 05:19:01.519609 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:19:01.519619 | orchestrator | Sunday 05 April 2026 05:18:54 +0000 (0:00:01.249) 0:05:31.614 **********
2026-04-05 05:19:01.519629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:19:01.519640 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:19:01.519651 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:19:01.519662 | orchestrator |
2026-04-05 05:19:01.519672 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 05:19:01.519683 | orchestrator | Sunday 05 April 2026 05:18:57 +0000 (0:00:03.082) 0:05:34.697 **********
2026-04-05 05:19:01.519693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:19:01.519704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:19:01.519715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:19:01.519726 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:01.519736 | orchestrator |
2026-04-05 05:19:01.519747 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 05:19:01.519757 | orchestrator | Sunday 05 April 2026 05:18:59 +0000 (0:00:01.413) 0:05:36.111 **********
2026-04-05 05:19:01.519769 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var':
'item'})  2026-04-05 05:19:01.519783 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:19:01.519793 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:19:01.519804 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:01.519815 | orchestrator | 2026-04-05 05:19:01.519826 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:19:01.519870 | orchestrator | Sunday 05 April 2026 05:19:01 +0000 (0:00:02.049) 0:05:38.161 ********** 2026-04-05 05:19:01.519890 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:22.425437 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:22.425589 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:22.425609 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:22.425623 | orchestrator | 2026-04-05 05:19:22.425635 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:19:22.425647 | orchestrator | Sunday 05 April 2026 05:19:02 +0000 (0:00:01.151) 0:05:39.312 ********** 2026-04-05 05:19:22.425661 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b58ad7ef29db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:18:55.435320', 'end': '2026-04-05 05:18:55.474161', 'delta': '0:00:00.038841', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b58ad7ef29db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:19:22.425677 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '0027b45af4f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:18:55.960366', 'end': '2026-04-05 05:18:56.027489', 'delta': '0:00:00.067123', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': 
None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0027b45af4f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:19:22.425688 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0e8f8775caf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:18:56.818018', 'end': '2026-04-05 05:18:56.865100', 'delta': '0:00:00.047082', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0e8f8775caf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:19:22.425699 | orchestrator | 2026-04-05 05:19:22.425711 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:19:22.425721 | orchestrator | Sunday 05 April 2026 05:19:03 +0000 (0:00:01.210) 0:05:40.523 ********** 2026-04-05 05:19:22.425732 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:19:22.425744 | orchestrator | 2026-04-05 05:19:22.425755 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:19:22.425765 | orchestrator | Sunday 05 April 2026 05:19:05 +0000 (0:00:01.655) 0:05:42.178 ********** 2026-04-05 05:19:22.425776 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:22.425787 | orchestrator | 2026-04-05 05:19:22.425808 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:19:22.425818 | orchestrator | Sunday 05 April 2026 05:19:06 +0000 (0:00:01.234) 
0:05:43.413 **********
2026-04-05 05:19:22.425829 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:22.425840 | orchestrator |
2026-04-05 05:19:22.425851 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 05:19:22.425861 | orchestrator | Sunday 05 April 2026 05:19:07 +0000 (0:00:01.219) 0:05:44.633 **********
2026-04-05 05:19:22.425933 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-04-05 05:19:22.425950 | orchestrator |
2026-04-05 05:19:22.425962 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:19:22.425974 | orchestrator | Sunday 05 April 2026 05:19:10 +0000 (0:00:02.473) 0:05:47.107 **********
2026-04-05 05:19:22.425987 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:19:22.426000 | orchestrator |
2026-04-05 05:19:22.426012 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 05:19:22.426085 | orchestrator | Sunday 05 April 2026 05:19:11 +0000 (0:00:01.132) 0:05:48.239 **********
2026-04-05 05:19:22.426098 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426111 | orchestrator |
2026-04-05 05:19:22.426124 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 05:19:22.426137 | orchestrator | Sunday 05 April 2026 05:19:12 +0000 (0:00:01.145) 0:05:49.385 **********
2026-04-05 05:19:22.426150 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426162 | orchestrator |
2026-04-05 05:19:22.426173 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:19:22.426183 | orchestrator | Sunday 05 April 2026 05:19:13 +0000 (0:00:01.208) 0:05:50.593 **********
2026-04-05 05:19:22.426194 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426205 | orchestrator |
2026-04-05 05:19:22.426215 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 05:19:22.426226 | orchestrator | Sunday 05 April 2026 05:19:15 +0000 (0:00:01.196) 0:05:51.790 **********
2026-04-05 05:19:22.426237 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426248 | orchestrator |
2026-04-05 05:19:22.426258 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 05:19:22.426269 | orchestrator | Sunday 05 April 2026 05:19:16 +0000 (0:00:01.160) 0:05:52.950 **********
2026-04-05 05:19:22.426280 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426291 | orchestrator |
2026-04-05 05:19:22.426302 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 05:19:22.426312 | orchestrator | Sunday 05 April 2026 05:19:17 +0000 (0:00:01.121) 0:05:54.072 **********
2026-04-05 05:19:22.426323 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426334 | orchestrator |
2026-04-05 05:19:22.426344 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 05:19:22.426355 | orchestrator | Sunday 05 April 2026 05:19:18 +0000 (0:00:01.136) 0:05:55.208 **********
2026-04-05 05:19:22.426365 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426376 | orchestrator |
2026-04-05 05:19:22.426387 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 05:19:22.426398 | orchestrator | Sunday 05 April 2026 05:19:19 +0000 (0:00:01.110) 0:05:56.319 **********
2026-04-05 05:19:22.426408 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:19:22.426419 | orchestrator |
2026-04-05 05:19:22.426430 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 05:19:22.426441 | orchestrator | Sunday 05 April 2026 05:19:20 +0000 (0:00:01.194) 0:05:57.514 **********
2026-04-05 05:19:22.426452 |
orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:22.426463 | orchestrator | 2026-04-05 05:19:22.426473 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 05:19:22.426484 | orchestrator | Sunday 05 April 2026 05:19:21 +0000 (0:00:01.137) 0:05:58.652 ********** 2026-04-05 05:19:22.426495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:22.426517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:22.426528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:22.426540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 
'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:19:22.426568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:23.985198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:23.985302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:23.985324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:19:23.985368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:23.985396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:19:23.985408 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:23.985420 | orchestrator | 2026-04-05 05:19:23.985433 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:19:23.985445 | orchestrator | Sunday 05 April 2026 05:19:23 +0000 (0:00:01.944) 0:06:00.597 ********** 2026-04-05 05:19:23.985477 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985491 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985502 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985535 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:23.985571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:49.076647 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:49.076858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:49.076906 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:19:49.077011 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077035 | orchestrator | 2026-04-05 05:19:49.077055 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:19:49.077076 | 
orchestrator | Sunday 05 April 2026 05:19:25 +0000 (0:00:01.221) 0:06:01.819 ********** 2026-04-05 05:19:49.077096 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:19:49.077117 | orchestrator | 2026-04-05 05:19:49.077137 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 05:19:49.077157 | orchestrator | Sunday 05 April 2026 05:19:26 +0000 (0:00:01.465) 0:06:03.284 ********** 2026-04-05 05:19:49.077177 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:19:49.077197 | orchestrator | 2026-04-05 05:19:49.077217 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:19:49.077263 | orchestrator | Sunday 05 April 2026 05:19:27 +0000 (0:00:01.141) 0:06:04.425 ********** 2026-04-05 05:19:49.077283 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:19:49.077303 | orchestrator | 2026-04-05 05:19:49.077322 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:19:49.077343 | orchestrator | Sunday 05 April 2026 05:19:29 +0000 (0:00:01.495) 0:06:05.920 ********** 2026-04-05 05:19:49.077363 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077382 | orchestrator | 2026-04-05 05:19:49.077402 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:19:49.077435 | orchestrator | Sunday 05 April 2026 05:19:30 +0000 (0:00:01.120) 0:06:07.041 ********** 2026-04-05 05:19:49.077454 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077473 | orchestrator | 2026-04-05 05:19:49.077491 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:19:49.077510 | orchestrator | Sunday 05 April 2026 05:19:31 +0000 (0:00:01.213) 0:06:08.254 ********** 2026-04-05 05:19:49.077528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077546 | orchestrator | 2026-04-05 05:19:49.077565 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 05:19:49.077583 | orchestrator | Sunday 05 April 2026 05:19:32 +0000 (0:00:01.132) 0:06:09.387 ********** 2026-04-05 05:19:49.077602 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:19:49.077621 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 05:19:49.077639 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 05:19:49.077658 | orchestrator | 2026-04-05 05:19:49.077677 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 05:19:49.077695 | orchestrator | Sunday 05 April 2026 05:19:34 +0000 (0:00:01.988) 0:06:11.375 ********** 2026-04-05 05:19:49.077713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 05:19:49.077732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 05:19:49.077750 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 05:19:49.077769 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077788 | orchestrator | 2026-04-05 05:19:49.077807 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 05:19:49.077826 | orchestrator | Sunday 05 April 2026 05:19:35 +0000 (0:00:01.223) 0:06:12.598 ********** 2026-04-05 05:19:49.077844 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.077863 | orchestrator | 2026-04-05 05:19:49.077882 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 05:19:49.077900 | orchestrator | Sunday 05 April 2026 05:19:37 +0000 (0:00:01.151) 0:06:13.750 ********** 2026-04-05 05:19:49.077919 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:19:49.077964 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 
05:19:49.077984 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:19:49.078003 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:19:49.078106 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:19:49.078131 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:19:49.078163 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:19:49.078182 | orchestrator | 2026-04-05 05:19:49.078201 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 05:19:49.078219 | orchestrator | Sunday 05 April 2026 05:19:39 +0000 (0:00:02.136) 0:06:15.886 ********** 2026-04-05 05:19:49.078237 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:19:49.078255 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:19:49.078273 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:19:49.078290 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:19:49.078308 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:19:49.078328 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:19:49.078347 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:19:49.078380 | orchestrator | 2026-04-05 05:19:49.078400 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-05 05:19:49.078420 | orchestrator | Sunday 05 April 2026 05:19:42 +0000 (0:00:02.940) 0:06:18.826 
********** 2026-04-05 05:19:49.078439 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-04-05 05:19:49.078459 | orchestrator | 2026-04-05 05:19:49.078485 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-05 05:19:49.078504 | orchestrator | Sunday 05 April 2026 05:19:44 +0000 (0:00:02.266) 0:06:21.093 ********** 2026-04-05 05:19:49.078521 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.078538 | orchestrator | 2026-04-05 05:19:49.078556 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-05 05:19:49.078574 | orchestrator | Sunday 05 April 2026 05:19:45 +0000 (0:00:01.217) 0:06:22.311 ********** 2026-04-05 05:19:49.078591 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:19:49.078608 | orchestrator | 2026-04-05 05:19:49.078627 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-05 05:19:49.078645 | orchestrator | Sunday 05 April 2026 05:19:46 +0000 (0:00:01.162) 0:06:23.473 ********** 2026-04-05 05:19:49.078662 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-04-05 05:19:49.078682 | orchestrator | 2026-04-05 05:19:49.078701 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-05 05:19:49.078736 | orchestrator | Sunday 05 April 2026 05:19:49 +0000 (0:00:02.307) 0:06:25.780 ********** 2026-04-05 05:20:50.289407 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.289535 | orchestrator | 2026-04-05 05:20:50.289553 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-05 05:20:50.289566 | orchestrator | Sunday 05 April 2026 05:19:50 +0000 (0:00:01.099) 0:06:26.880 ********** 2026-04-05 05:20:50.289578 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:20:50.289590 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:20:50.289601 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:20:50.289612 | orchestrator | 2026-04-05 05:20:50.289623 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-05 05:20:50.289634 | orchestrator | Sunday 05 April 2026 05:19:52 +0000 (0:00:02.496) 0:06:29.377 ********** 2026-04-05 05:20:50.289645 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-04-05 05:20:50.289656 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-04-05 05:20:50.289667 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-04-05 05:20:50.289678 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-04-05 05:20:50.289689 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-04-05 05:20:50.289700 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-04-05 05:20:50.289710 | orchestrator | 2026-04-05 05:20:50.289721 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-05 05:20:50.289732 | orchestrator | Sunday 05 April 2026 05:20:06 +0000 (0:00:13.438) 0:06:42.816 ********** 2026-04-05 05:20:50.289743 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:20:50.289754 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:20:50.289764 | orchestrator | 2026-04-05 05:20:50.289775 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-05 05:20:50.289785 | orchestrator | Sunday 05 
April 2026 05:20:09 +0000 (0:00:03.842) 0:06:46.658 ********** 2026-04-05 05:20:50.289796 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:20:50.289806 | orchestrator | 2026-04-05 05:20:50.289817 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 05:20:50.289854 | orchestrator | Sunday 05 April 2026 05:20:12 +0000 (0:00:02.443) 0:06:49.102 ********** 2026-04-05 05:20:50.289865 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-05 05:20:50.289876 | orchestrator | 2026-04-05 05:20:50.289887 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 05:20:50.289897 | orchestrator | Sunday 05 April 2026 05:20:13 +0000 (0:00:01.430) 0:06:50.533 ********** 2026-04-05 05:20:50.289908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-05 05:20:50.289918 | orchestrator | 2026-04-05 05:20:50.289929 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 05:20:50.289942 | orchestrator | Sunday 05 April 2026 05:20:15 +0000 (0:00:01.562) 0:06:52.096 ********** 2026-04-05 05:20:50.289955 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.289968 | orchestrator | 2026-04-05 05:20:50.289981 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 05:20:50.289993 | orchestrator | Sunday 05 April 2026 05:20:16 +0000 (0:00:01.478) 0:06:53.574 ********** 2026-04-05 05:20:50.290004 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290098 | orchestrator | 2026-04-05 05:20:50.290113 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 05:20:50.290128 | orchestrator | Sunday 05 April 2026 05:20:17 +0000 (0:00:01.129) 0:06:54.703 ********** 2026-04-05 05:20:50.290141 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290153 | orchestrator | 2026-04-05 05:20:50.290165 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 05:20:50.290177 | orchestrator | Sunday 05 April 2026 05:20:19 +0000 (0:00:01.112) 0:06:55.816 ********** 2026-04-05 05:20:50.290189 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290202 | orchestrator | 2026-04-05 05:20:50.290215 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 05:20:50.290227 | orchestrator | Sunday 05 April 2026 05:20:20 +0000 (0:00:01.113) 0:06:56.930 ********** 2026-04-05 05:20:50.290239 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290251 | orchestrator | 2026-04-05 05:20:50.290263 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 05:20:50.290290 | orchestrator | Sunday 05 April 2026 05:20:21 +0000 (0:00:01.589) 0:06:58.520 ********** 2026-04-05 05:20:50.290303 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290316 | orchestrator | 2026-04-05 05:20:50.290328 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 05:20:50.290339 | orchestrator | Sunday 05 April 2026 05:20:22 +0000 (0:00:01.139) 0:06:59.659 ********** 2026-04-05 05:20:50.290349 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290360 | orchestrator | 2026-04-05 05:20:50.290370 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 05:20:50.290381 | orchestrator | Sunday 05 April 2026 05:20:24 +0000 (0:00:01.111) 0:07:00.771 ********** 2026-04-05 05:20:50.290391 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290402 | orchestrator | 2026-04-05 05:20:50.290412 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 
05:20:50.290423 | orchestrator | Sunday 05 April 2026 05:20:25 +0000 (0:00:01.510) 0:07:02.282 ********** 2026-04-05 05:20:50.290433 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290444 | orchestrator | 2026-04-05 05:20:50.290472 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 05:20:50.290484 | orchestrator | Sunday 05 April 2026 05:20:27 +0000 (0:00:01.544) 0:07:03.827 ********** 2026-04-05 05:20:50.290494 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290505 | orchestrator | 2026-04-05 05:20:50.290516 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 05:20:50.290527 | orchestrator | Sunday 05 April 2026 05:20:28 +0000 (0:00:01.168) 0:07:04.995 ********** 2026-04-05 05:20:50.290537 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290560 | orchestrator | 2026-04-05 05:20:50.290570 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 05:20:50.290581 | orchestrator | Sunday 05 April 2026 05:20:29 +0000 (0:00:01.283) 0:07:06.278 ********** 2026-04-05 05:20:50.290592 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290602 | orchestrator | 2026-04-05 05:20:50.290613 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 05:20:50.290623 | orchestrator | Sunday 05 April 2026 05:20:30 +0000 (0:00:01.170) 0:07:07.449 ********** 2026-04-05 05:20:50.290634 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290645 | orchestrator | 2026-04-05 05:20:50.290655 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 05:20:50.290666 | orchestrator | Sunday 05 April 2026 05:20:31 +0000 (0:00:01.181) 0:07:08.631 ********** 2026-04-05 05:20:50.290676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290687 | orchestrator | 
2026-04-05 05:20:50.290697 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 05:20:50.290708 | orchestrator | Sunday 05 April 2026 05:20:33 +0000 (0:00:01.096) 0:07:09.727 ********** 2026-04-05 05:20:50.290718 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290729 | orchestrator | 2026-04-05 05:20:50.290739 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 05:20:50.290750 | orchestrator | Sunday 05 April 2026 05:20:34 +0000 (0:00:01.098) 0:07:10.826 ********** 2026-04-05 05:20:50.290760 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290771 | orchestrator | 2026-04-05 05:20:50.290782 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 05:20:50.290792 | orchestrator | Sunday 05 April 2026 05:20:35 +0000 (0:00:01.144) 0:07:11.970 ********** 2026-04-05 05:20:50.290803 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290813 | orchestrator | 2026-04-05 05:20:50.290824 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 05:20:50.290834 | orchestrator | Sunday 05 April 2026 05:20:36 +0000 (0:00:01.140) 0:07:13.110 ********** 2026-04-05 05:20:50.290845 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290859 | orchestrator | 2026-04-05 05:20:50.290870 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 05:20:50.290881 | orchestrator | Sunday 05 April 2026 05:20:37 +0000 (0:00:01.144) 0:07:14.255 ********** 2026-04-05 05:20:50.290892 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:20:50.290902 | orchestrator | 2026-04-05 05:20:50.290913 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 05:20:50.290923 | orchestrator | Sunday 05 April 2026 05:20:38 +0000 (0:00:01.148) 0:07:15.404 
********** 2026-04-05 05:20:50.290934 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290945 | orchestrator | 2026-04-05 05:20:50.290955 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 05:20:50.290966 | orchestrator | Sunday 05 April 2026 05:20:39 +0000 (0:00:01.139) 0:07:16.544 ********** 2026-04-05 05:20:50.290976 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.290987 | orchestrator | 2026-04-05 05:20:50.290997 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 05:20:50.291008 | orchestrator | Sunday 05 April 2026 05:20:40 +0000 (0:00:01.122) 0:07:17.666 ********** 2026-04-05 05:20:50.291018 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291029 | orchestrator | 2026-04-05 05:20:50.291058 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 05:20:50.291078 | orchestrator | Sunday 05 April 2026 05:20:42 +0000 (0:00:01.180) 0:07:18.847 ********** 2026-04-05 05:20:50.291090 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291100 | orchestrator | 2026-04-05 05:20:50.291111 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 05:20:50.291122 | orchestrator | Sunday 05 April 2026 05:20:43 +0000 (0:00:01.106) 0:07:19.953 ********** 2026-04-05 05:20:50.291132 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291150 | orchestrator | 2026-04-05 05:20:50.291161 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 05:20:50.291171 | orchestrator | Sunday 05 April 2026 05:20:44 +0000 (0:00:01.149) 0:07:21.102 ********** 2026-04-05 05:20:50.291182 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291192 | orchestrator | 2026-04-05 05:20:50.291203 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-04-05 05:20:50.291213 | orchestrator | Sunday 05 April 2026 05:20:45 +0000 (0:00:01.332) 0:07:22.435 ********** 2026-04-05 05:20:50.291230 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291240 | orchestrator | 2026-04-05 05:20:50.291251 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 05:20:50.291262 | orchestrator | Sunday 05 April 2026 05:20:46 +0000 (0:00:01.117) 0:07:23.553 ********** 2026-04-05 05:20:50.291272 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291283 | orchestrator | 2026-04-05 05:20:50.291294 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 05:20:50.291304 | orchestrator | Sunday 05 April 2026 05:20:47 +0000 (0:00:01.145) 0:07:24.699 ********** 2026-04-05 05:20:50.291315 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291325 | orchestrator | 2026-04-05 05:20:50.291336 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 05:20:50.291346 | orchestrator | Sunday 05 April 2026 05:20:49 +0000 (0:00:01.139) 0:07:25.838 ********** 2026-04-05 05:20:50.291357 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:20:50.291367 | orchestrator | 2026-04-05 05:20:50.291378 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 05:20:50.291388 | orchestrator | Sunday 05 April 2026 05:20:50 +0000 (0:00:01.155) 0:07:26.993 ********** 2026-04-05 05:21:40.601115 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601258 | orchestrator | 2026-04-05 05:21:40.601276 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 05:21:40.601288 | orchestrator | Sunday 05 April 2026 05:20:51 +0000 (0:00:01.186) 0:07:28.180 ********** 2026-04-05 05:21:40.601299 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 05:21:40.601310 | orchestrator | 2026-04-05 05:21:40.601322 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 05:21:40.601334 | orchestrator | Sunday 05 April 2026 05:20:52 +0000 (0:00:01.122) 0:07:29.303 ********** 2026-04-05 05:21:40.601345 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.601356 | orchestrator | 2026-04-05 05:21:40.601367 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 05:21:40.601378 | orchestrator | Sunday 05 April 2026 05:20:54 +0000 (0:00:01.951) 0:07:31.254 ********** 2026-04-05 05:21:40.601389 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.601399 | orchestrator | 2026-04-05 05:21:40.601410 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 05:21:40.601421 | orchestrator | Sunday 05 April 2026 05:20:56 +0000 (0:00:02.377) 0:07:33.631 ********** 2026-04-05 05:21:40.601432 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-05 05:21:40.601444 | orchestrator | 2026-04-05 05:21:40.601454 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 05:21:40.601465 | orchestrator | Sunday 05 April 2026 05:20:58 +0000 (0:00:01.458) 0:07:35.090 ********** 2026-04-05 05:21:40.601476 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601487 | orchestrator | 2026-04-05 05:21:40.601498 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 05:21:40.601508 | orchestrator | Sunday 05 April 2026 05:20:59 +0000 (0:00:01.137) 0:07:36.227 ********** 2026-04-05 05:21:40.601519 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601530 | orchestrator | 2026-04-05 05:21:40.601541 | orchestrator | TASK [ceph-container-common : Remove ceph udev 
rules] ************************** 2026-04-05 05:21:40.601552 | orchestrator | Sunday 05 April 2026 05:21:00 +0000 (0:00:01.329) 0:07:37.557 ********** 2026-04-05 05:21:40.601587 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 05:21:40.601598 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 05:21:40.601609 | orchestrator | 2026-04-05 05:21:40.601620 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 05:21:40.601630 | orchestrator | Sunday 05 April 2026 05:21:02 +0000 (0:00:01.833) 0:07:39.390 ********** 2026-04-05 05:21:40.601641 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.601652 | orchestrator | 2026-04-05 05:21:40.601664 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 05:21:40.601677 | orchestrator | Sunday 05 April 2026 05:21:04 +0000 (0:00:01.625) 0:07:41.016 ********** 2026-04-05 05:21:40.601689 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601702 | orchestrator | 2026-04-05 05:21:40.601715 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 05:21:40.601728 | orchestrator | Sunday 05 April 2026 05:21:05 +0000 (0:00:01.172) 0:07:42.189 ********** 2026-04-05 05:21:40.601740 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601752 | orchestrator | 2026-04-05 05:21:40.601765 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 05:21:40.601777 | orchestrator | Sunday 05 April 2026 05:21:06 +0000 (0:00:01.187) 0:07:43.377 ********** 2026-04-05 05:21:40.601790 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.601802 | orchestrator | 2026-04-05 05:21:40.601815 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 
05:21:40.601827 | orchestrator | Sunday 05 April 2026 05:21:07 +0000 (0:00:01.100) 0:07:44.477 ********** 2026-04-05 05:21:40.601839 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-05 05:21:40.601852 | orchestrator | 2026-04-05 05:21:40.601865 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 05:21:40.601877 | orchestrator | Sunday 05 April 2026 05:21:09 +0000 (0:00:01.469) 0:07:45.947 ********** 2026-04-05 05:21:40.601889 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.601902 | orchestrator | 2026-04-05 05:21:40.601914 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 05:21:40.601927 | orchestrator | Sunday 05 April 2026 05:21:10 +0000 (0:00:01.730) 0:07:47.678 ********** 2026-04-05 05:21:40.601939 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 05:21:40.601951 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 05:21:40.601964 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 05:21:40.601990 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602003 | orchestrator | 2026-04-05 05:21:40.602069 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-05 05:21:40.602081 | orchestrator | Sunday 05 April 2026 05:21:12 +0000 (0:00:01.201) 0:07:48.880 ********** 2026-04-05 05:21:40.602092 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602103 | orchestrator | 2026-04-05 05:21:40.602114 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-05 05:21:40.602156 | orchestrator | Sunday 05 April 2026 05:21:13 +0000 (0:00:01.108) 0:07:49.988 ********** 2026-04-05 05:21:40.602169 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 05:21:40.602180 | orchestrator | 2026-04-05 05:21:40.602190 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 05:21:40.602201 | orchestrator | Sunday 05 April 2026 05:21:14 +0000 (0:00:01.187) 0:07:51.176 ********** 2026-04-05 05:21:40.602212 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602223 | orchestrator | 2026-04-05 05:21:40.602234 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 05:21:40.602261 | orchestrator | Sunday 05 April 2026 05:21:15 +0000 (0:00:01.174) 0:07:52.350 ********** 2026-04-05 05:21:40.602283 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602293 | orchestrator | 2026-04-05 05:21:40.602304 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 05:21:40.602315 | orchestrator | Sunday 05 April 2026 05:21:16 +0000 (0:00:01.159) 0:07:53.510 ********** 2026-04-05 05:21:40.602325 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602336 | orchestrator | 2026-04-05 05:21:40.602347 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 05:21:40.602357 | orchestrator | Sunday 05 April 2026 05:21:18 +0000 (0:00:01.328) 0:07:54.838 ********** 2026-04-05 05:21:40.602368 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.602379 | orchestrator | 2026-04-05 05:21:40.602389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 05:21:40.602400 | orchestrator | Sunday 05 April 2026 05:21:20 +0000 (0:00:02.665) 0:07:57.504 ********** 2026-04-05 05:21:40.602410 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.602421 | orchestrator | 2026-04-05 05:21:40.602432 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 05:21:40.602442 | 
orchestrator | Sunday 05 April 2026 05:21:21 +0000 (0:00:01.130) 0:07:58.635 ********** 2026-04-05 05:21:40.602453 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-04-05 05:21:40.602464 | orchestrator | 2026-04-05 05:21:40.602474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 05:21:40.602485 | orchestrator | Sunday 05 April 2026 05:21:23 +0000 (0:00:01.488) 0:08:00.123 ********** 2026-04-05 05:21:40.602496 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602506 | orchestrator | 2026-04-05 05:21:40.602517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 05:21:40.602527 | orchestrator | Sunday 05 April 2026 05:21:24 +0000 (0:00:01.128) 0:08:01.252 ********** 2026-04-05 05:21:40.602538 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602548 | orchestrator | 2026-04-05 05:21:40.602559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 05:21:40.602570 | orchestrator | Sunday 05 April 2026 05:21:25 +0000 (0:00:01.167) 0:08:02.420 ********** 2026-04-05 05:21:40.602580 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602591 | orchestrator | 2026-04-05 05:21:40.602601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-05 05:21:40.602612 | orchestrator | Sunday 05 April 2026 05:21:26 +0000 (0:00:01.195) 0:08:03.615 ********** 2026-04-05 05:21:40.602622 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602633 | orchestrator | 2026-04-05 05:21:40.602644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-05 05:21:40.602655 | orchestrator | Sunday 05 April 2026 05:21:28 +0000 (0:00:01.151) 0:08:04.766 ********** 2026-04-05 05:21:40.602666 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 05:21:40.602676 | orchestrator | 2026-04-05 05:21:40.602687 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 05:21:40.602697 | orchestrator | Sunday 05 April 2026 05:21:29 +0000 (0:00:01.138) 0:08:05.905 ********** 2026-04-05 05:21:40.602708 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602718 | orchestrator | 2026-04-05 05:21:40.602729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 05:21:40.602739 | orchestrator | Sunday 05 April 2026 05:21:30 +0000 (0:00:01.161) 0:08:07.066 ********** 2026-04-05 05:21:40.602750 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602760 | orchestrator | 2026-04-05 05:21:40.602771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 05:21:40.602782 | orchestrator | Sunday 05 April 2026 05:21:31 +0000 (0:00:01.155) 0:08:08.222 ********** 2026-04-05 05:21:40.602792 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:21:40.602803 | orchestrator | 2026-04-05 05:21:40.602813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 05:21:40.602824 | orchestrator | Sunday 05 April 2026 05:21:32 +0000 (0:00:01.148) 0:08:09.370 ********** 2026-04-05 05:21:40.602841 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:21:40.602852 | orchestrator | 2026-04-05 05:21:40.602862 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 05:21:40.602873 | orchestrator | Sunday 05 April 2026 05:21:33 +0000 (0:00:01.208) 0:08:10.578 ********** 2026-04-05 05:21:40.602883 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-04-05 05:21:40.602894 | orchestrator | 2026-04-05 05:21:40.602904 | orchestrator | TASK [ceph-config : Create ceph initial 
directories] *************************** 2026-04-05 05:21:40.602915 | orchestrator | Sunday 05 April 2026 05:21:35 +0000 (0:00:01.454) 0:08:12.033 ********** 2026-04-05 05:21:40.602925 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-04-05 05:21:40.602937 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-05 05:21:40.602947 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-05 05:21:40.602963 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-05 05:21:40.602974 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-05 05:21:40.602985 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-05 05:21:40.602995 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-05 05:21:40.603006 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-05 05:21:40.603017 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 05:21:40.603027 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 05:21:40.603038 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 05:21:40.603048 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 05:21:40.603059 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 05:21:40.603070 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 05:21:40.603086 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-04-05 05:22:29.418789 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-04-05 05:22:29.418930 | orchestrator | 2026-04-05 05:22:29.418959 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 05:22:29.418981 | orchestrator | Sunday 05 April 2026 05:21:42 +0000 (0:00:06.746) 0:08:18.780 ********** 2026-04-05 
05:22:29.419002 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419023 | orchestrator | 2026-04-05 05:22:29.419043 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 05:22:29.419066 | orchestrator | Sunday 05 April 2026 05:21:43 +0000 (0:00:01.116) 0:08:19.896 ********** 2026-04-05 05:22:29.419086 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419106 | orchestrator | 2026-04-05 05:22:29.419118 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 05:22:29.419129 | orchestrator | Sunday 05 April 2026 05:21:44 +0000 (0:00:01.103) 0:08:21.000 ********** 2026-04-05 05:22:29.419140 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419151 | orchestrator | 2026-04-05 05:22:29.419162 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 05:22:29.419173 | orchestrator | Sunday 05 April 2026 05:21:45 +0000 (0:00:01.151) 0:08:22.151 ********** 2026-04-05 05:22:29.419184 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419194 | orchestrator | 2026-04-05 05:22:29.419236 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 05:22:29.419248 | orchestrator | Sunday 05 April 2026 05:21:46 +0000 (0:00:01.117) 0:08:23.269 ********** 2026-04-05 05:22:29.419259 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419270 | orchestrator | 2026-04-05 05:22:29.419281 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 05:22:29.419292 | orchestrator | Sunday 05 April 2026 05:21:47 +0000 (0:00:01.131) 0:08:24.400 ********** 2026-04-05 05:22:29.419333 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419347 | orchestrator | 2026-04-05 05:22:29.419360 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see 
how many osds are to be created] *** 2026-04-05 05:22:29.419373 | orchestrator | Sunday 05 April 2026 05:21:48 +0000 (0:00:01.168) 0:08:25.569 ********** 2026-04-05 05:22:29.419386 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419398 | orchestrator | 2026-04-05 05:22:29.419411 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 05:22:29.419423 | orchestrator | Sunday 05 April 2026 05:21:50 +0000 (0:00:01.173) 0:08:26.742 ********** 2026-04-05 05:22:29.419436 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419448 | orchestrator | 2026-04-05 05:22:29.419460 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-05 05:22:29.419473 | orchestrator | Sunday 05 April 2026 05:21:51 +0000 (0:00:01.169) 0:08:27.912 ********** 2026-04-05 05:22:29.419485 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419498 | orchestrator | 2026-04-05 05:22:29.419510 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 05:22:29.419522 | orchestrator | Sunday 05 April 2026 05:21:52 +0000 (0:00:01.139) 0:08:29.051 ********** 2026-04-05 05:22:29.419534 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419547 | orchestrator | 2026-04-05 05:22:29.419560 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 05:22:29.419577 | orchestrator | Sunday 05 April 2026 05:21:53 +0000 (0:00:01.106) 0:08:30.158 ********** 2026-04-05 05:22:29.419597 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419617 | orchestrator | 2026-04-05 05:22:29.419639 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 05:22:29.419660 | orchestrator | Sunday 05 April 2026 05:21:54 +0000 (0:00:01.156) 
0:08:31.314 ********** 2026-04-05 05:22:29.419680 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419692 | orchestrator | 2026-04-05 05:22:29.419702 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 05:22:29.419713 | orchestrator | Sunday 05 April 2026 05:21:55 +0000 (0:00:01.149) 0:08:32.464 ********** 2026-04-05 05:22:29.419724 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419734 | orchestrator | 2026-04-05 05:22:29.419745 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 05:22:29.419756 | orchestrator | Sunday 05 April 2026 05:21:56 +0000 (0:00:01.252) 0:08:33.717 ********** 2026-04-05 05:22:29.419766 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419777 | orchestrator | 2026-04-05 05:22:29.419787 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 05:22:29.419798 | orchestrator | Sunday 05 April 2026 05:21:58 +0000 (0:00:01.119) 0:08:34.837 ********** 2026-04-05 05:22:29.419808 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419819 | orchestrator | 2026-04-05 05:22:29.419829 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:22:29.419855 | orchestrator | Sunday 05 April 2026 05:21:59 +0000 (0:00:01.248) 0:08:36.085 ********** 2026-04-05 05:22:29.419867 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419877 | orchestrator | 2026-04-05 05:22:29.419888 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:22:29.419899 | orchestrator | Sunday 05 April 2026 05:22:00 +0000 (0:00:01.242) 0:08:37.328 ********** 2026-04-05 05:22:29.419969 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.419980 | orchestrator | 2026-04-05 05:22:29.419991 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:22:29.420004 | orchestrator | Sunday 05 April 2026 05:22:01 +0000 (0:00:01.160) 0:08:38.489 ********** 2026-04-05 05:22:29.420014 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420025 | orchestrator | 2026-04-05 05:22:29.420036 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 05:22:29.420056 | orchestrator | Sunday 05 April 2026 05:22:02 +0000 (0:00:01.119) 0:08:39.608 ********** 2026-04-05 05:22:29.420067 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420078 | orchestrator | 2026-04-05 05:22:29.420108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:22:29.420120 | orchestrator | Sunday 05 April 2026 05:22:04 +0000 (0:00:01.197) 0:08:40.806 ********** 2026-04-05 05:22:29.420131 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420142 | orchestrator | 2026-04-05 05:22:29.420153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:22:29.420163 | orchestrator | Sunday 05 April 2026 05:22:05 +0000 (0:00:01.133) 0:08:41.940 ********** 2026-04-05 05:22:29.420174 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420185 | orchestrator | 2026-04-05 05:22:29.420195 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:22:29.420229 | orchestrator | Sunday 05 April 2026 05:22:06 +0000 (0:00:01.148) 0:08:43.088 ********** 2026-04-05 05:22:29.420240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:22:29.420251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:22:29.420261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:22:29.420272 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 05:22:29.420282 | orchestrator | 2026-04-05 05:22:29.420293 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 05:22:29.420304 | orchestrator | Sunday 05 April 2026 05:22:08 +0000 (0:00:01.948) 0:08:45.037 ********** 2026-04-05 05:22:29.420314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:22:29.420325 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:22:29.420335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:22:29.420346 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420356 | orchestrator | 2026-04-05 05:22:29.420367 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:22:29.420377 | orchestrator | Sunday 05 April 2026 05:22:09 +0000 (0:00:01.425) 0:08:46.463 ********** 2026-04-05 05:22:29.420388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:22:29.420399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:22:29.420409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:22:29.420419 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420430 | orchestrator | 2026-04-05 05:22:29.420441 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:22:29.420452 | orchestrator | Sunday 05 April 2026 05:22:11 +0000 (0:00:01.499) 0:08:47.962 ********** 2026-04-05 05:22:29.420462 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420473 | orchestrator | 2026-04-05 05:22:29.420484 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:22:29.420494 | orchestrator | Sunday 05 April 2026 05:22:12 +0000 (0:00:01.143) 0:08:49.105 ********** 2026-04-05 05:22:29.420505 | orchestrator | 
skipping: [testbed-node-0] => (item=0)  2026-04-05 05:22:29.420516 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420526 | orchestrator | 2026-04-05 05:22:29.420537 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 05:22:29.420548 | orchestrator | Sunday 05 April 2026 05:22:13 +0000 (0:00:01.369) 0:08:50.475 ********** 2026-04-05 05:22:29.420558 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420569 | orchestrator | 2026-04-05 05:22:29.420580 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-05 05:22:29.420590 | orchestrator | Sunday 05 April 2026 05:22:15 +0000 (0:00:01.842) 0:08:52.317 ********** 2026-04-05 05:22:29.420601 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420612 | orchestrator | 2026-04-05 05:22:29.420622 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-05 05:22:29.420640 | orchestrator | Sunday 05 April 2026 05:22:16 +0000 (0:00:01.153) 0:08:53.471 ********** 2026-04-05 05:22:29.420650 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-04-05 05:22:29.420662 | orchestrator | 2026-04-05 05:22:29.420673 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-05 05:22:29.420683 | orchestrator | Sunday 05 April 2026 05:22:18 +0000 (0:00:01.570) 0:08:55.042 ********** 2026-04-05 05:22:29.420694 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-05 05:22:29.420704 | orchestrator | 2026-04-05 05:22:29.420715 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-05 05:22:29.420725 | orchestrator | Sunday 05 April 2026 05:22:21 +0000 (0:00:03.228) 0:08:58.271 ********** 2026-04-05 05:22:29.420736 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:22:29.420746 | 
orchestrator | 2026-04-05 05:22:29.420757 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-05 05:22:29.420767 | orchestrator | Sunday 05 April 2026 05:22:22 +0000 (0:00:01.217) 0:08:59.488 ********** 2026-04-05 05:22:29.420778 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420789 | orchestrator | 2026-04-05 05:22:29.420805 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-05 05:22:29.420817 | orchestrator | Sunday 05 April 2026 05:22:23 +0000 (0:00:01.210) 0:09:00.699 ********** 2026-04-05 05:22:29.420827 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420838 | orchestrator | 2026-04-05 05:22:29.420848 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-05 05:22:29.420859 | orchestrator | Sunday 05 April 2026 05:22:25 +0000 (0:00:01.192) 0:09:01.891 ********** 2026-04-05 05:22:29.420869 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:22:29.420880 | orchestrator | 2026-04-05 05:22:29.420891 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-05 05:22:29.420901 | orchestrator | Sunday 05 April 2026 05:22:27 +0000 (0:00:02.154) 0:09:04.045 ********** 2026-04-05 05:22:29.420912 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420923 | orchestrator | 2026-04-05 05:22:29.420933 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-05 05:22:29.420944 | orchestrator | Sunday 05 April 2026 05:22:28 +0000 (0:00:01.568) 0:09:05.614 ********** 2026-04-05 05:22:29.420955 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:22:29.420965 | orchestrator | 2026-04-05 05:22:29.420982 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-05 05:23:28.344840 | orchestrator | Sunday 05 April 2026 05:22:30 +0000 (0:00:01.530) 
0:09:07.145 ********** 2026-04-05 05:23:28.344954 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.344971 | orchestrator | 2026-04-05 05:23:28.344984 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-05 05:23:28.344996 | orchestrator | Sunday 05 April 2026 05:22:31 +0000 (0:00:01.549) 0:09:08.695 ********** 2026-04-05 05:23:28.345007 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345018 | orchestrator | 2026-04-05 05:23:28.345029 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-05 05:23:28.345040 | orchestrator | Sunday 05 April 2026 05:22:33 +0000 (0:00:01.808) 0:09:10.504 ********** 2026-04-05 05:23:28.345051 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345061 | orchestrator | 2026-04-05 05:23:28.345072 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-05 05:23:28.345083 | orchestrator | Sunday 05 April 2026 05:22:35 +0000 (0:00:01.720) 0:09:12.225 ********** 2026-04-05 05:23:28.345094 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 05:23:28.345105 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-05 05:23:28.345116 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 05:23:28.345126 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-04-05 05:23:28.345137 | orchestrator | 2026-04-05 05:23:28.345148 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-05 05:23:28.345182 | orchestrator | Sunday 05 April 2026 05:22:39 +0000 (0:00:03.857) 0:09:16.082 ********** 2026-04-05 05:23:28.345194 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:23:28.345204 | orchestrator | 2026-04-05 05:23:28.345215 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-05 
05:23:28.345226 | orchestrator | Sunday 05 April 2026 05:22:41 +0000 (0:00:02.105) 0:09:18.188 ********** 2026-04-05 05:23:28.345236 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345247 | orchestrator | 2026-04-05 05:23:28.345257 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-05 05:23:28.345268 | orchestrator | Sunday 05 April 2026 05:22:42 +0000 (0:00:01.160) 0:09:19.349 ********** 2026-04-05 05:23:28.345278 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345289 | orchestrator | 2026-04-05 05:23:28.345328 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-05 05:23:28.345349 | orchestrator | Sunday 05 April 2026 05:22:43 +0000 (0:00:01.143) 0:09:20.493 ********** 2026-04-05 05:23:28.345369 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345388 | orchestrator | 2026-04-05 05:23:28.345405 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-05 05:23:28.345418 | orchestrator | Sunday 05 April 2026 05:22:46 +0000 (0:00:02.320) 0:09:22.813 ********** 2026-04-05 05:23:28.345431 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345444 | orchestrator | 2026-04-05 05:23:28.345457 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-05 05:23:28.345469 | orchestrator | Sunday 05 April 2026 05:22:47 +0000 (0:00:01.622) 0:09:24.436 ********** 2026-04-05 05:23:28.345482 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:23:28.345494 | orchestrator | 2026-04-05 05:23:28.345506 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-05 05:23:28.345519 | orchestrator | Sunday 05 April 2026 05:22:48 +0000 (0:00:01.117) 0:09:25.554 ********** 2026-04-05 05:23:28.345531 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-04-05 
05:23:28.345544 | orchestrator | 2026-04-05 05:23:28.345556 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-05 05:23:28.345568 | orchestrator | Sunday 05 April 2026 05:22:50 +0000 (0:00:01.554) 0:09:27.109 ********** 2026-04-05 05:23:28.345581 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:23:28.345594 | orchestrator | 2026-04-05 05:23:28.345606 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-05 05:23:28.345618 | orchestrator | Sunday 05 April 2026 05:22:51 +0000 (0:00:01.134) 0:09:28.244 ********** 2026-04-05 05:23:28.345630 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:23:28.345643 | orchestrator | 2026-04-05 05:23:28.345655 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-05 05:23:28.345669 | orchestrator | Sunday 05 April 2026 05:22:52 +0000 (0:00:01.119) 0:09:29.363 ********** 2026-04-05 05:23:28.345682 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-04-05 05:23:28.345695 | orchestrator | 2026-04-05 05:23:28.345708 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-05 05:23:28.345719 | orchestrator | Sunday 05 April 2026 05:22:54 +0000 (0:00:01.481) 0:09:30.845 ********** 2026-04-05 05:23:28.345729 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345740 | orchestrator | 2026-04-05 05:23:28.345750 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-05 05:23:28.345775 | orchestrator | Sunday 05 April 2026 05:22:56 +0000 (0:00:02.327) 0:09:33.172 ********** 2026-04-05 05:23:28.345786 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345796 | orchestrator | 2026-04-05 05:23:28.345807 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-05 
05:23:28.345818 | orchestrator | Sunday 05 April 2026 05:22:58 +0000 (0:00:01.916) 0:09:35.089 ********** 2026-04-05 05:23:28.345828 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345847 | orchestrator | 2026-04-05 05:23:28.345858 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-05 05:23:28.345868 | orchestrator | Sunday 05 April 2026 05:23:00 +0000 (0:00:02.472) 0:09:37.562 ********** 2026-04-05 05:23:28.345879 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:23:28.345890 | orchestrator | 2026-04-05 05:23:28.345900 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-05 05:23:28.345911 | orchestrator | Sunday 05 April 2026 05:23:04 +0000 (0:00:03.211) 0:09:40.774 ********** 2026-04-05 05:23:28.345922 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-04-05 05:23:28.345932 | orchestrator | 2026-04-05 05:23:28.345960 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-05 05:23:28.345972 | orchestrator | Sunday 05 April 2026 05:23:05 +0000 (0:00:01.707) 0:09:42.481 ********** 2026-04-05 05:23:28.345982 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.345993 | orchestrator | 2026-04-05 05:23:28.346003 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-05 05:23:28.346073 | orchestrator | Sunday 05 April 2026 05:23:08 +0000 (0:00:02.246) 0:09:44.731 ********** 2026-04-05 05:23:28.346088 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:23:28.346099 | orchestrator | 2026-04-05 05:23:28.346109 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-05 05:23:28.346120 | orchestrator | Sunday 05 April 2026 05:23:11 +0000 (0:00:03.046) 0:09:47.777 ********** 2026-04-05 05:23:28.346131 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:23:28.346141 | orchestrator | 2026-04-05 05:23:28.346152 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-05 05:23:28.346162 | orchestrator | Sunday 05 April 2026 05:23:12 +0000 (0:00:01.149) 0:09:48.927 ********** 2026-04-05 05:23:28.346175 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-05 05:23:28.346189 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-04-05 05:23:28.346201 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-05 05:23:28.346212 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-05 05:23:28.346224 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-05 05:23:28.346236 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}])  2026-04-05 05:23:28.346256 | orchestrator | 2026-04-05 05:23:28.346267 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-05 05:23:28.346278 | orchestrator | Sunday 05 April 2026 05:23:22 +0000 (0:00:09.851) 0:09:58.778 ********** 
2026-04-05 05:23:28.346288 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:23:28.346299 | orchestrator |
2026-04-05 05:23:28.346336 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:23:28.346353 | orchestrator | Sunday 05 April 2026 05:23:24 +0000 (0:00:02.465) 0:10:01.244 **********
2026-04-05 05:23:28.346364 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:23:28.346375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:23:28.346385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:23:28.346396 | orchestrator |
2026-04-05 05:23:28.346406 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:23:28.346416 | orchestrator | Sunday 05 April 2026 05:23:26 +0000 (0:00:02.367) 0:10:03.611 **********
2026-04-05 05:23:28.346427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:23:28.346438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:23:28.346449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:23:28.346459 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:23:28.346470 | orchestrator |
2026-04-05 05:23:28.346480 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-05 05:23:28.346498 | orchestrator | Sunday 05 April 2026 05:23:28 +0000 (0:00:01.435) 0:10:05.047 **********
2026-04-05 05:23:56.867477 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:23:56.867618 | orchestrator |
2026-04-05 05:23:56.867646 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-05 05:23:56.867668 | orchestrator | Sunday 05 April 2026 05:23:29 +0000 (0:00:01.137) 0:10:06.185 **********
2026-04-05 05:23:56.867692 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:23:56.867713 | orchestrator |
2026-04-05 05:23:56.867734 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-05 05:23:56.867751 | orchestrator |
2026-04-05 05:23:56.867762 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-05 05:23:56.867773 | orchestrator | Sunday 05 April 2026 05:23:31 +0000 (0:00:02.203) 0:10:08.389 **********
2026-04-05 05:23:56.867784 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.867795 | orchestrator |
2026-04-05 05:23:56.867806 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-05 05:23:56.867818 | orchestrator | Sunday 05 April 2026 05:23:32 +0000 (0:00:01.167) 0:10:09.557 **********
2026-04-05 05:23:56.867837 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.867854 | orchestrator |
2026-04-05 05:23:56.867872 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-05 05:23:56.867890 | orchestrator | Sunday 05 April 2026 05:23:33 +0000 (0:00:00.810) 0:10:10.367 **********
2026-04-05 05:23:56.867908 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:23:56.867929 | orchestrator |
2026-04-05 05:23:56.867949 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-05 05:23:56.867968 | orchestrator | Sunday 05 April 2026 05:23:34 +0000 (0:00:00.806) 0:10:11.174 **********
2026-04-05 05:23:56.867988 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868006 | orchestrator |
2026-04-05 05:23:56.868026 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:23:56.868046 | orchestrator | Sunday 05 April 2026 05:23:35 +0000 (0:00:00.795) 0:10:11.969 **********
2026-04-05 05:23:56.868066 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-05 05:23:56.868122 | orchestrator |
2026-04-05 05:23:56.868143 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:23:56.868171 | orchestrator | Sunday 05 April 2026 05:23:36 +0000 (0:00:01.096) 0:10:13.066 **********
2026-04-05 05:23:56.868190 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868208 | orchestrator |
2026-04-05 05:23:56.868225 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:23:56.868242 | orchestrator | Sunday 05 April 2026 05:23:37 +0000 (0:00:01.486) 0:10:14.553 **********
2026-04-05 05:23:56.868260 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868277 | orchestrator |
2026-04-05 05:23:56.868295 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:23:56.868312 | orchestrator | Sunday 05 April 2026 05:23:38 +0000 (0:00:01.126) 0:10:15.680 **********
2026-04-05 05:23:56.868330 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868375 | orchestrator |
2026-04-05 05:23:56.868395 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:23:56.868413 | orchestrator | Sunday 05 April 2026 05:23:40 +0000 (0:00:01.497) 0:10:17.177 **********
2026-04-05 05:23:56.868431 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868448 | orchestrator |
2026-04-05 05:23:56.868465 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:23:56.868482 | orchestrator | Sunday 05 April 2026 05:23:41 +0000 (0:00:01.166) 0:10:18.344 **********
2026-04-05 05:23:56.868498 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868515 | orchestrator |
2026-04-05 05:23:56.868532 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:23:56.868550 | orchestrator | Sunday 05 April 2026 05:23:42 +0000 (0:00:01.185) 0:10:19.529 **********
2026-04-05 05:23:56.868568 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868585 | orchestrator |
2026-04-05 05:23:56.868602 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:23:56.868622 | orchestrator | Sunday 05 April 2026 05:23:43 +0000 (0:00:01.137) 0:10:20.667 **********
2026-04-05 05:23:56.868640 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:23:56.868658 | orchestrator |
2026-04-05 05:23:56.868676 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:23:56.868693 | orchestrator | Sunday 05 April 2026 05:23:45 +0000 (0:00:01.134) 0:10:21.801 **********
2026-04-05 05:23:56.868712 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868730 | orchestrator |
2026-04-05 05:23:56.868748 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:23:56.868767 | orchestrator | Sunday 05 April 2026 05:23:46 +0000 (0:00:01.188) 0:10:22.990 **********
2026-04-05 05:23:56.868785 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:23:56.868805 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:23:56.868843 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:23:56.868862 | orchestrator |
2026-04-05 05:23:56.868880 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:23:56.868897 | orchestrator | Sunday 05 April 2026 05:23:48 +0000 (0:00:01.809) 0:10:24.799 **********
2026-04-05 05:23:56.868914 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:23:56.868932 |
orchestrator | 2026-04-05 05:23:56.868948 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 05:23:56.868964 | orchestrator | Sunday 05 April 2026 05:23:49 +0000 (0:00:01.320) 0:10:26.121 ********** 2026-04-05 05:23:56.868983 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:23:56.869000 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 05:23:56.869018 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:23:56.869037 | orchestrator | 2026-04-05 05:23:56.869057 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 05:23:56.869094 | orchestrator | Sunday 05 April 2026 05:23:52 +0000 (0:00:03.050) 0:10:29.172 ********** 2026-04-05 05:23:56.869143 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 05:23:56.869165 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 05:23:56.869184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 05:23:56.869202 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:23:56.869213 | orchestrator | 2026-04-05 05:23:56.869224 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 05:23:56.869235 | orchestrator | Sunday 05 April 2026 05:23:53 +0000 (0:00:01.408) 0:10:30.580 ********** 2026-04-05 05:23:56.869248 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869262 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869284 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:23:56.869295 | orchestrator | 2026-04-05 05:23:56.869306 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:23:56.869316 | orchestrator | Sunday 05 April 2026 05:23:55 +0000 (0:00:01.673) 0:10:32.253 ********** 2026-04-05 05:23:56.869329 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869344 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869385 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:23:56.869396 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:23:56.869407 | orchestrator | 2026-04-05 05:23:56.869418 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:23:56.869429 | orchestrator | Sunday 05 April 2026 05:23:56 +0000 (0:00:01.218) 0:10:33.472 ********** 2026-04-05 05:23:56.869450 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:23:49.945933', 'end': '2026-04-05 05:23:49.997571', 'delta': '0:00:00.051638', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:23:56.869487 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '0027b45af4f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:23:50.509106', 'end': '2026-04-05 05:23:50.568518', 'delta': '0:00:00.059412', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0027b45af4f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:24:16.225712 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd0e8f8775caf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:23:51.125698', 'end': '2026-04-05 05:23:51.176485', 'delta': '0:00:00.050787', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0e8f8775caf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:24:16.225867 | orchestrator | 2026-04-05 05:24:16.225900 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:24:16.225923 | orchestrator | Sunday 05 April 2026 05:23:58 +0000 (0:00:01.263) 0:10:34.735 ********** 2026-04-05 05:24:16.225943 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:24:16.225962 | orchestrator | 2026-04-05 05:24:16.225981 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:24:16.226000 | orchestrator | Sunday 05 April 2026 05:23:59 +0000 (0:00:01.277) 0:10:36.013 ********** 2026-04-05 05:24:16.226086 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:24:16.226101 | orchestrator | 2026-04-05 05:24:16.226113 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:24:16.226124 | orchestrator | Sunday 05 April 2026 05:24:00 +0000 (0:00:01.272) 0:10:37.285 ********** 2026-04-05 05:24:16.226135 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:24:16.226146 | orchestrator | 2026-04-05 05:24:16.226157 | orchestrator | TASK 
[ceph-facts : Get current fsid] *******************************************
2026-04-05 05:24:16.226168 | orchestrator | Sunday 05 April 2026 05:24:01 +0000 (0:00:01.172) 0:10:38.458 **********
2026-04-05 05:24:16.226179 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-05 05:24:16.226190 | orchestrator |
2026-04-05 05:24:16.226201 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:24:16.226215 | orchestrator | Sunday 05 April 2026 05:24:03 +0000 (0:00:02.181) 0:10:40.639 **********
2026-04-05 05:24:16.226228 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:24:16.226240 | orchestrator |
2026-04-05 05:24:16.226253 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 05:24:16.226266 | orchestrator | Sunday 05 April 2026 05:24:05 +0000 (0:00:01.168) 0:10:41.808 **********
2026-04-05 05:24:16.226278 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226290 | orchestrator |
2026-04-05 05:24:16.226304 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 05:24:16.226317 | orchestrator | Sunday 05 April 2026 05:24:06 +0000 (0:00:01.248) 0:10:43.056 **********
2026-04-05 05:24:16.226330 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226343 | orchestrator |
2026-04-05 05:24:16.226356 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:24:16.226421 | orchestrator | Sunday 05 April 2026 05:24:07 +0000 (0:00:01.625) 0:10:44.682 **********
2026-04-05 05:24:16.226436 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226448 | orchestrator |
2026-04-05 05:24:16.226460 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 05:24:16.226473 | orchestrator | Sunday 05 April 2026 05:24:09 +0000 (0:00:01.157) 0:10:45.840 **********
2026-04-05 05:24:16.226486 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226498 | orchestrator |
2026-04-05 05:24:16.226510 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 05:24:16.226523 | orchestrator | Sunday 05 April 2026 05:24:10 +0000 (0:00:01.204) 0:10:47.044 **********
2026-04-05 05:24:16.226536 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226549 | orchestrator |
2026-04-05 05:24:16.226561 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 05:24:16.226573 | orchestrator | Sunday 05 April 2026 05:24:11 +0000 (0:00:01.165) 0:10:48.210 **********
2026-04-05 05:24:16.226584 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226594 | orchestrator |
2026-04-05 05:24:16.226605 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 05:24:16.226630 | orchestrator | Sunday 05 April 2026 05:24:12 +0000 (0:00:01.138) 0:10:49.349 **********
2026-04-05 05:24:16.226641 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226651 | orchestrator |
2026-04-05 05:24:16.226662 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 05:24:16.226673 | orchestrator | Sunday 05 April 2026 05:24:13 +0000 (0:00:01.147) 0:10:50.496 **********
2026-04-05 05:24:16.226684 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226694 | orchestrator |
2026-04-05 05:24:16.226705 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 05:24:16.226717 | orchestrator | Sunday 05 April 2026 05:24:14 +0000 (0:00:01.136) 0:10:51.633 **********
2026-04-05 05:24:16.226727 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:16.226738 | orchestrator |
2026-04-05 05:24:16.226748 | orchestrator | TASK [ceph-facts : Collect existed devices]
************************************ 2026-04-05 05:24:16.226759 | orchestrator | Sunday 05 April 2026 05:24:16 +0000 (0:00:01.144) 0:10:52.778 ********** 2026-04-05 05:24:16.226794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:24:16.226854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:16.226908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:24:17.483353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:17.483574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:24:17.483600 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:24:17.483615 | orchestrator | 2026-04-05 05:24:17.483627 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:24:17.483639 | orchestrator | Sunday 05 April 2026 05:24:17 +0000 (0:00:01.289) 0:10:54.068 ********** 2026-04-05 05:24:17.483653 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483668 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483695 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483708 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483823 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483851 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:24:17.483863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:24:17.483888 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:24:17.483916 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:24:53.133330 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:24:53.133492 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.133512 | orchestrator |
2026-04-05 05:24:53.133524 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 05:24:53.133537 | orchestrator | Sunday 05 April 2026 05:24:18 +0000 (0:00:01.193) 0:10:55.261 **********
2026-04-05 05:24:53.133548 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:24:53.133559 | orchestrator |
2026-04-05 05:24:53.133571 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 05:24:53.133582 | orchestrator | Sunday 05 April 2026 05:24:20 +0000 (0:00:01.559) 0:10:56.820 **********
2026-04-05 05:24:53.133592 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:24:53.133603 | orchestrator |
2026-04-05 05:24:53.133614 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:24:53.133625 | orchestrator | Sunday 05 April 2026 05:24:21 +0000 (0:00:01.154) 0:10:57.975 **********
2026-04-05 05:24:53.133636 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:24:53.133647 | orchestrator |
2026-04-05 05:24:53.133658 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:24:53.133668 | orchestrator | Sunday 05 April 2026 05:24:22 +0000 (0:00:01.486) 0:10:59.461 **********
2026-04-05 05:24:53.133679 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.133690 | orchestrator |
2026-04-05 05:24:53.133701 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:24:53.133712 | orchestrator | Sunday 05 April 2026 05:24:24 +0000 (0:00:01.346) 0:11:00.808 **********
2026-04-05 05:24:53.133723 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.133733 | orchestrator |
2026-04-05 05:24:53.133744 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:24:53.133755 | orchestrator | Sunday 05 April 2026 05:24:25 +0000 (0:00:01.256) 0:11:02.064 **********
2026-04-05 05:24:53.133766 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.133777 | orchestrator |
2026-04-05 05:24:53.133787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:24:53.133814 | orchestrator | Sunday 05 April 2026 05:24:26 +0000 (0:00:01.158) 0:11:03.223 **********
2026-04-05 05:24:53.133825 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:24:53.133837 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.133847 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:24:53.133858 | orchestrator |
2026-04-05 05:24:53.133872 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:24:53.133884 | orchestrator | Sunday 05 April 2026 05:24:28 +0000 (0:00:01.682) 0:11:04.906 **********
2026-04-05 05:24:53.133897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:24:53.133910 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.133923 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:24:53.133936 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.133971 | orchestrator |
2026-04-05 05:24:53.133984 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 05:24:53.133997 | orchestrator | Sunday 05 April 2026 05:24:29 +0000 (0:00:01.137) 0:11:06.044 **********
2026-04-05 05:24:53.134010 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134076 | orchestrator |
2026-04-05 05:24:53.134090 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:24:53.134102 | orchestrator | Sunday 05 April 2026 05:24:30 +0000 (0:00:01.177) 0:11:07.221 **********
2026-04-05 05:24:53.134115 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:24:53.134162 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.134176 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:24:53.134189 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:24:53.134202 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:24:53.134215 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:24:53.134226 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:24:53.134237 | orchestrator |
2026-04-05 05:24:53.134247 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:24:53.134272 | orchestrator | Sunday 05 April 2026 05:24:32 +0000 (0:00:01.867) 0:11:09.089 **********
2026-04-05 05:24:53.134294 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:24:53.134305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.134316 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:24:53.134327 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:24:53.134355 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:24:53.134367 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:24:53.134378 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:24:53.134389 | orchestrator |
2026-04-05 05:24:53.134399 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-05 05:24:53.134410 | orchestrator | Sunday 05 April 2026 05:24:34 +0000 (0:00:02.292) 0:11:11.382 **********
2026-04-05 05:24:53.134420 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134431 | orchestrator |
2026-04-05 05:24:53.134464 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-05 05:24:53.134475 | orchestrator | Sunday 05 April 2026 05:24:35 +0000 (0:00:00.874) 0:11:12.256 **********
2026-04-05 05:24:53.134486 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134497 | orchestrator |
2026-04-05 05:24:53.134508 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-05 05:24:53.134519 | orchestrator | Sunday 05 April 2026 05:24:36 +0000 (0:00:00.903) 0:11:13.160 **********
2026-04-05 05:24:53.134529 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134540 | orchestrator |
2026-04-05 05:24:53.134551 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-05 05:24:53.134562 | orchestrator | Sunday 05 April 2026 05:24:37 +0000 (0:00:00.770) 0:11:13.931 **********
2026-04-05 05:24:53.134573 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134583 | orchestrator |
2026-04-05 05:24:53.134594 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-05 05:24:53.134605 | orchestrator | Sunday 05 April 2026 05:24:38 +0000 (0:00:00.939) 0:11:14.870 **********
2026-04-05 05:24:53.134616 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134626 | orchestrator |
2026-04-05 05:24:53.134637 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-05 05:24:53.134658 | orchestrator | Sunday 05 April 2026 05:24:39 +0000 (0:00:00.880) 0:11:15.751 **********
2026-04-05 05:24:53.134669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:24:53.134680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.134691 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:24:53.134701 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134712 | orchestrator |
2026-04-05 05:24:53.134723 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-05 05:24:53.134734 | orchestrator | Sunday 05 April 2026 05:24:40 +0000 (0:00:01.105) 0:11:16.856 **********
2026-04-05 05:24:53.134744 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-05 05:24:53.134755 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-05 05:24:53.134772 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-05 05:24:53.134783 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-05 05:24:53.134794 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-05 05:24:53.134805 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-05 05:24:53.134816 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.134827 | orchestrator |
2026-04-05 05:24:53.134837 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-05 05:24:53.134848 | orchestrator | Sunday 05 April 2026 05:24:41 +0000 (0:00:01.326) 0:11:18.182 **********
2026-04-05 05:24:53.134859 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.134869 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:24:53.134880 | orchestrator |
2026-04-05 05:24:53.134891 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-05 05:24:53.134902 | orchestrator | Sunday 05 April 2026 05:24:44 +0000 (0:00:03.273) 0:11:21.456 **********
2026-04-05 05:24:53.134913 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:24:53.134923 | orchestrator |
2026-04-05 05:24:53.134934 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:24:53.134945 | orchestrator | Sunday 05 April 2026 05:24:46 +0000 (0:00:02.184) 0:11:23.640 **********
2026-04-05 05:24:53.134955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-04-05 05:24:53.134967 | orchestrator |
2026-04-05 05:24:53.134978 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 05:24:53.134989 | orchestrator | Sunday 05 April 2026 05:24:48 +0000 (0:00:01.156) 0:11:24.796 **********
2026-04-05 05:24:53.134999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-04-05 05:24:53.135013 | orchestrator |
2026-04-05 05:24:53.135031 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 05:24:53.135050 | orchestrator | Sunday 05 April 2026 05:24:49 +0000 (0:00:01.153) 0:11:25.950 **********
2026-04-05 05:24:53.135069 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:24:53.135086 | orchestrator |
2026-04-05 05:24:53.135104 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 05:24:53.135121 | orchestrator | Sunday 05 April 2026 05:24:50 +0000 (0:00:01.585) 0:11:27.536 **********
2026-04-05 05:24:53.135138 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.135155 | orchestrator |
2026-04-05 05:24:53.135173 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 05:24:53.135191 | orchestrator | Sunday 05 April 2026 05:24:52 +0000 (0:00:01.188) 0:11:28.724 **********
2026-04-05 05:24:53.135209 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:24:53.135228 | orchestrator |
2026-04-05 05:24:53.135243 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 05:24:53.135270 | orchestrator | Sunday 05 April 2026 05:24:53 +0000 (0:00:01.112) 0:11:29.836 **********
2026-04-05 05:25:35.023480 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.023642 | orchestrator |
2026-04-05 05:25:35.023662 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 05:25:35.023675 | orchestrator | Sunday 05 April 2026 05:24:54 +0000 (0:00:01.130) 0:11:30.967 **********
2026-04-05 05:25:35.023686 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.023699 | orchestrator |
2026-04-05 05:25:35.023710 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 05:25:35.023721 | orchestrator | Sunday 05 April 2026 05:24:55 +0000 (0:00:01.574) 0:11:32.542 **********
2026-04-05 05:25:35.023732 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.023744 | orchestrator |
2026-04-05 05:25:35.023755 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 05:25:35.023766 | orchestrator | Sunday 05 April 2026 05:24:56 +0000 (0:00:01.139) 0:11:33.681 **********
2026-04-05 05:25:35.023777 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.023788 | orchestrator |
2026-04-05 05:25:35.023799 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 05:25:35.023810 | orchestrator | Sunday 05 April 2026 05:24:58 +0000 (0:00:01.144) 0:11:34.826 **********
2026-04-05 05:25:35.023821 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.023832 | orchestrator |
2026-04-05 05:25:35.023843 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 05:25:35.023854 | orchestrator | Sunday 05 April 2026 05:24:59 +0000 (0:00:01.575) 0:11:36.402 **********
2026-04-05 05:25:35.023865 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.023876 | orchestrator |
2026-04-05 05:25:35.023888 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 05:25:35.023899 | orchestrator | Sunday 05 April 2026 05:25:01 +0000 (0:00:01.686) 0:11:38.088 **********
2026-04-05 05:25:35.023910 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.023920 | orchestrator |
2026-04-05 05:25:35.023931 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:25:35.023942 | orchestrator | Sunday 05 April 2026 05:25:02 +0000 (0:00:00.763) 0:11:38.851 **********
2026-04-05 05:25:35.023953 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.023964 | orchestrator |
2026-04-05 05:25:35.023975 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:25:35.023986 | orchestrator | Sunday 05 April 2026 05:25:02 +0000 (0:00:00.823) 0:11:39.675 **********
2026-04-05 05:25:35.023997 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024008 | orchestrator |
2026-04-05 05:25:35.024022 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:25:35.024035 | orchestrator | Sunday 05 April 2026 05:25:03 +0000 (0:00:00.771) 0:11:40.446 **********
2026-04-05 05:25:35.024048 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024061 | orchestrator |
2026-04-05 05:25:35.024075 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:25:35.024088 | orchestrator | Sunday 05 April 2026 05:25:04 +0000 (0:00:00.770) 0:11:41.217 **********
2026-04-05 05:25:35.024118 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024131 | orchestrator |
2026-04-05 05:25:35.024142 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:25:35.024153 | orchestrator | Sunday 05 April 2026 05:25:05 +0000 (0:00:00.753) 0:11:41.971 **********
2026-04-05 05:25:35.024164 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024175 | orchestrator |
2026-04-05 05:25:35.024186 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:25:35.024196 | orchestrator | Sunday 05 April 2026 05:25:05 +0000 (0:00:00.740) 0:11:42.711 **********
2026-04-05 05:25:35.024208 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024218 | orchestrator |
2026-04-05 05:25:35.024229 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:25:35.024265 | orchestrator | Sunday 05 April 2026 05:25:06 +0000 (0:00:00.742) 0:11:43.454 **********
2026-04-05 05:25:35.024277 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.024288 | orchestrator |
2026-04-05 05:25:35.024299 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:25:35.024309 | orchestrator | Sunday 05 April 2026 05:25:07 +0000 (0:00:00.883) 0:11:44.337 **********
2026-04-05 05:25:35.024320 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.024331 | orchestrator |
2026-04-05 05:25:35.024342 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:25:35.024352 | orchestrator | Sunday 05 April 2026 05:25:08 +0000 (0:00:00.782) 0:11:45.120 **********
2026-04-05 05:25:35.024363 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.024374 | orchestrator |
2026-04-05 05:25:35.024384 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:25:35.024395 | orchestrator | Sunday 05 April 2026 05:25:09 +0000 (0:00:00.867) 0:11:45.988 **********
2026-04-05 05:25:35.024406 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024416 | orchestrator |
2026-04-05 05:25:35.024427 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:25:35.024438 | orchestrator | Sunday 05 April 2026 05:25:10 +0000 (0:00:00.773) 0:11:46.761 **********
2026-04-05 05:25:35.024449 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024459 | orchestrator |
2026-04-05 05:25:35.024470 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:25:35.024480 | orchestrator | Sunday 05 April 2026 05:25:10 +0000 (0:00:00.759) 0:11:47.521 **********
2026-04-05 05:25:35.024491 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024520 | orchestrator |
2026-04-05 05:25:35.024531 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:25:35.024542 | orchestrator | Sunday 05 April 2026 05:25:11 +0000 (0:00:00.782) 0:11:48.303 **********
2026-04-05 05:25:35.024553 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024564 | orchestrator |
2026-04-05 05:25:35.024574 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:25:35.024585 | orchestrator | Sunday 05 April 2026 05:25:12 +0000 (0:00:00.763) 0:11:49.067 **********
2026-04-05 05:25:35.024596 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024607 | orchestrator |
2026-04-05 05:25:35.024636 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:25:35.024648 | orchestrator | Sunday 05 April 2026 05:25:13 +0000 (0:00:00.800) 0:11:49.868 **********
2026-04-05 05:25:35.024659 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024670 | orchestrator |
2026-04-05 05:25:35.024680 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:25:35.024691 | orchestrator | Sunday 05 April 2026 05:25:13 +0000 (0:00:00.766) 0:11:50.635 **********
2026-04-05 05:25:35.024702 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024713 | orchestrator |
2026-04-05 05:25:35.024723 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:25:35.024735 | orchestrator | Sunday 05 April 2026 05:25:14 +0000 (0:00:00.762) 0:11:51.398 **********
2026-04-05 05:25:35.024746 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024756 | orchestrator |
2026-04-05 05:25:35.024767 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:25:35.024778 | orchestrator | Sunday 05 April 2026 05:25:15 +0000 (0:00:00.821) 0:11:52.220 **********
2026-04-05 05:25:35.024802 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024813 | orchestrator |
2026-04-05 05:25:35.024824 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:25:35.024834 | orchestrator | Sunday 05 April 2026 05:25:16 +0000 (0:00:00.770) 0:11:52.990 **********
2026-04-05 05:25:35.024845 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024856 | orchestrator |
2026-04-05 05:25:35.024866 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:25:35.024885 | orchestrator | Sunday 05 April 2026 05:25:17 +0000 (0:00:00.756) 0:11:53.747 **********
2026-04-05 05:25:35.024896 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024907 | orchestrator |
2026-04-05 05:25:35.024918 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:25:35.024929 | orchestrator | Sunday 05 April 2026 05:25:17 +0000 (0:00:00.767) 0:11:54.514 **********
2026-04-05 05:25:35.024939 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.024950 | orchestrator |
2026-04-05 05:25:35.024961 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:25:35.024972 | orchestrator | Sunday 05 April 2026 05:25:18 +0000 (0:00:00.890) 0:11:55.405 **********
2026-04-05 05:25:35.024982 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.024993 | orchestrator |
2026-04-05 05:25:35.025004 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:25:35.025014 | orchestrator | Sunday 05 April 2026 05:25:20 +0000 (0:00:02.058) 0:11:57.135 **********
2026-04-05 05:25:35.025025 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.025036 | orchestrator |
2026-04-05 05:25:35.025046 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:25:35.025057 | orchestrator | Sunday 05 April 2026 05:25:22 +0000 (0:00:02.058) 0:11:59.193 **********
2026-04-05 05:25:35.025074 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-05 05:25:35.025085 | orchestrator |
2026-04-05 05:25:35.025096 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:25:35.025107 | orchestrator | Sunday 05 April 2026 05:25:23 +0000 (0:00:01.118) 0:12:00.311 **********
2026-04-05 05:25:35.025117 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.025128 | orchestrator |
2026-04-05 05:25:35.025149 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:25:35.025160 | orchestrator | Sunday 05 April 2026 05:25:24 +0000 (0:00:01.131) 0:12:01.443 **********
2026-04-05 05:25:35.025171 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.025186 | orchestrator |
2026-04-05 05:25:35.025203 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:25:35.025221 | orchestrator | Sunday 05 April 2026 05:25:25 +0000 (0:00:01.165) 0:12:02.608 **********
2026-04-05 05:25:35.025239 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:25:35.025257 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:25:35.025275 | orchestrator |
2026-04-05 05:25:35.025293 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:25:35.025311 | orchestrator | Sunday 05 April 2026 05:25:27 +0000 (0:00:01.900) 0:12:04.508 **********
2026-04-05 05:25:35.025322 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.025333 | orchestrator |
2026-04-05 05:25:35.025343 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:25:35.025354 | orchestrator | Sunday 05 April 2026 05:25:29 +0000 (0:00:01.462) 0:12:05.971 **********
2026-04-05 05:25:35.025365 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.025376 | orchestrator |
2026-04-05 05:25:35.025386 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:25:35.025397 | orchestrator | Sunday 05 April 2026 05:25:30 +0000 (0:00:01.188) 0:12:07.159 **********
2026-04-05 05:25:35.025408 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.025418 | orchestrator |
2026-04-05 05:25:35.025429 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:25:35.025440 | orchestrator | Sunday 05 April 2026 05:25:31 +0000 (0:00:00.814) 0:12:07.974 **********
2026-04-05 05:25:35.025451 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:25:35.025461 | orchestrator |
2026-04-05 05:25:35.025472 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:25:35.025483 | orchestrator | Sunday 05 April 2026 05:25:32 +0000 (0:00:00.763) 0:12:08.738 **********
2026-04-05 05:25:35.025525 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-05 05:25:35.025537 | orchestrator |
2026-04-05 05:25:35.025547 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:25:35.025558 | orchestrator | Sunday 05 April 2026 05:25:33 +0000 (0:00:01.245) 0:12:09.983 **********
2026-04-05 05:25:35.025569 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:25:35.025580 | orchestrator |
2026-04-05 05:25:35.025591 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:25:35.025610 | orchestrator | Sunday 05 April 2026 05:25:35 +0000 (0:00:01.744) 0:12:11.728 **********
2026-04-05 05:26:14.697341 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:26:14.697420 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:26:14.697428 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:26:14.697434 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697439 | orchestrator |
2026-04-05 05:26:14.697444 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:26:14.697449 | orchestrator | Sunday 05 April 2026 05:25:36 +0000 (0:00:01.158) 0:12:12.886 **********
2026-04-05 05:26:14.697453 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697457 | orchestrator |
2026-04-05 05:26:14.697462 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:26:14.697467 | orchestrator | Sunday 05 April 2026 05:25:37 +0000 (0:00:01.109) 0:12:13.995 **********
2026-04-05 05:26:14.697471 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697475 | orchestrator |
2026-04-05 05:26:14.697480 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 05:26:14.697484 | orchestrator | Sunday 05 April 2026 05:25:38 +0000 (0:00:01.207) 0:12:15.202 **********
2026-04-05 05:26:14.697488 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697492 | orchestrator |
2026-04-05 05:26:14.697497 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:26:14.697501 | orchestrator | Sunday 05 April 2026 05:25:39 +0000 (0:00:01.114) 0:12:16.317 **********
2026-04-05 05:26:14.697505 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697509 | orchestrator |
2026-04-05 05:26:14.697514 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:26:14.697518 | orchestrator | Sunday 05 April 2026 05:25:40 +0000 (0:00:01.172) 0:12:17.489 **********
2026-04-05 05:26:14.697522 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697526 | orchestrator |
2026-04-05 05:26:14.697530 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:26:14.697535 | orchestrator | Sunday 05 April 2026 05:25:41 +0000 (0:00:00.818) 0:12:18.308 **********
2026-04-05 05:26:14.697539 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:26:14.697545 | orchestrator |
2026-04-05 05:26:14.697549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:26:14.697553 | orchestrator | Sunday 05 April 2026 05:25:43 +0000 (0:00:02.264) 0:12:20.573 **********
2026-04-05 05:26:14.697585 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:26:14.697590 | orchestrator |
2026-04-05 05:26:14.697594 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:26:14.697598 | orchestrator | Sunday 05 April 2026 05:25:44 +0000 (0:00:00.773) 0:12:21.347 **********
2026-04-05 05:26:14.697614 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-05 05:26:14.697618 | orchestrator |
2026-04-05 05:26:14.697622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:26:14.697627 | orchestrator | Sunday 05 April 2026 05:25:45 +0000 (0:00:01.099) 0:12:22.446 **********
2026-04-05 05:26:14.697631 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697651 | orchestrator |
2026-04-05 05:26:14.697655 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:26:14.697660 | orchestrator | Sunday 05 April 2026 05:25:46 +0000 (0:00:01.177) 0:12:23.624 **********
2026-04-05 05:26:14.697664 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697668 | orchestrator |
2026-04-05 05:26:14.697672 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:26:14.697676 | orchestrator | Sunday 05 April 2026 05:25:48 +0000 (0:00:01.162) 0:12:24.786 **********
2026-04-05 05:26:14.697681 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697685 | orchestrator |
2026-04-05 05:26:14.697689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:26:14.697693 | orchestrator | Sunday 05 April 2026 05:25:49 +0000 (0:00:01.126) 0:12:25.913 **********
2026-04-05 05:26:14.697697 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697702 | orchestrator |
2026-04-05 05:26:14.697706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:26:14.697710 | orchestrator | Sunday 05 April 2026 05:25:50 +0000 (0:00:01.158) 0:12:27.072 **********
2026-04-05 05:26:14.697714 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697718 | orchestrator |
2026-04-05 05:26:14.697723 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 05:26:14.697727 | orchestrator | Sunday 05 April 2026 05:25:51 +0000 (0:00:01.157) 0:12:28.229 **********
2026-04-05 05:26:14.697731 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697735 | orchestrator |
2026-04-05 05:26:14.697739 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:26:14.697744 | orchestrator | Sunday 05 April 2026 05:25:52 +0000 (0:00:01.129) 0:12:29.359 **********
2026-04-05 05:26:14.697748 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697752 | orchestrator |
2026-04-05 05:26:14.697757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:26:14.697765 | orchestrator | Sunday 05 April 2026 05:25:53 +0000 (0:00:01.166) 0:12:30.525 **********
2026-04-05 05:26:14.697772 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697779 | orchestrator |
2026-04-05 05:26:14.697787 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:26:14.697792 | orchestrator | Sunday 05 April 2026 05:25:54 +0000 (0:00:01.121) 0:12:31.647 **********
2026-04-05 05:26:14.697796 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:26:14.697800 | orchestrator |
2026-04-05 05:26:14.697805 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:26:14.697809 | orchestrator | Sunday 05 April 2026 05:25:55 +0000 (0:00:00.868) 0:12:32.516 **********
2026-04-05 05:26:14.697813 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-05 05:26:14.697818 | orchestrator |
2026-04-05 05:26:14.697823 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:26:14.697837 | orchestrator | Sunday 05 April 2026 05:25:56 +0000 (0:00:01.137) 0:12:33.653 **********
2026-04-05 05:26:14.697842 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-05 05:26:14.697847 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-05 05:26:14.697851 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-05 05:26:14.697855 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-05 05:26:14.697860 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-05 05:26:14.697864 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-05 05:26:14.697868 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-05 05:26:14.697872 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:26:14.697877 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:26:14.697882 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:26:14.697886 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:26:14.697894 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:26:14.697898 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:26:14.697903 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:26:14.697907 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-05 05:26:14.697912 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-05 05:26:14.697917 | orchestrator |
2026-04-05 05:26:14.697922 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:26:14.697927 | orchestrator | Sunday 05 April 2026 05:26:03 +0000 (0:00:06.636) 0:12:40.290 **********
2026-04-05 05:26:14.697931 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697936 | orchestrator |
2026-04-05 05:26:14.697941 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:26:14.697946 | orchestrator | Sunday 05 April 2026 05:26:04 +0000 (0:00:00.802) 0:12:41.092 **********
2026-04-05 05:26:14.697951 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697956 | orchestrator |
2026-04-05 05:26:14.697960 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:26:14.697965 | orchestrator | Sunday 05 April 2026 05:26:05 +0000 (0:00:00.781) 0:12:41.874 **********
2026-04-05 05:26:14.697970 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697975 | orchestrator |
2026-04-05 05:26:14.697980 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:26:14.697985 | orchestrator | Sunday 05 April 2026 05:26:05 +0000 (0:00:00.767) 0:12:42.641 **********
2026-04-05 05:26:14.697990 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.697995 | orchestrator |
2026-04-05 05:26:14.698003 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:26:14.698009 | orchestrator | Sunday 05 April 2026 05:26:06 +0000 (0:00:00.767) 0:12:43.408 **********
2026-04-05 05:26:14.698013 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698053 | orchestrator |
2026-04-05 05:26:14.698058 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:26:14.698063 | orchestrator | Sunday 05 April 2026 05:26:07 +0000 (0:00:00.768) 0:12:44.176 **********
2026-04-05 05:26:14.698068 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698073 | orchestrator |
2026-04-05 05:26:14.698078 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:26:14.698083 | orchestrator | Sunday 05 April 2026 05:26:08 +0000 (0:00:00.803) 0:12:44.980 **********
2026-04-05 05:26:14.698088 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698094 | orchestrator |
2026-04-05 05:26:14.698099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:26:14.698104 | orchestrator | Sunday 05 April 2026 05:26:09 +0000 (0:00:00.783) 0:12:45.763 **********
2026-04-05 05:26:14.698109 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698114 | orchestrator |
2026-04-05 05:26:14.698119 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:26:14.698124 | orchestrator | Sunday 05 April 2026 05:26:09 +0000 (0:00:00.791) 0:12:46.555 **********
2026-04-05 05:26:14.698129 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698135 | orchestrator |
2026-04-05 05:26:14.698140 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:26:14.698145 | orchestrator | Sunday 05 April 2026 05:26:10 +0000 (0:00:00.798) 0:12:47.354 **********
2026-04-05 05:26:14.698150 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698154 | orchestrator |
2026-04-05 05:26:14.698159 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:26:14.698163 | orchestrator | Sunday 05 April 2026 05:26:11 +0000 (0:00:00.774) 0:12:48.128 **********
2026-04-05 05:26:14.698167 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698175 | orchestrator |
2026-04-05 05:26:14.698179 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:26:14.698184 | orchestrator | Sunday 05 April 2026 05:26:12 +0000 (0:00:00.799) 0:12:48.928 **********
2026-04-05 05:26:14.698188 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698192 | orchestrator |
2026-04-05 05:26:14.698196 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:26:14.698201 | orchestrator | Sunday 05 April 2026 05:26:13 +0000 (0:00:00.789) 0:12:49.717 **********
2026-04-05 05:26:14.698205 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698209 | orchestrator |
2026-04-05 05:26:14.698213 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:26:14.698218 | orchestrator | Sunday 05 April 2026 05:26:13 +0000 (0:00:00.854) 0:12:50.572 **********
2026-04-05 05:26:14.698222 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:26:14.698226 | orchestrator |
2026-04-05 05:26:14.698230 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:26:14.698238 | orchestrator | Sunday 05 April 2026 05:26:14 +0000 (0:00:00.830) 0:12:51.403 **********
2026-04-05 05:27:02.957133 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957250 | orchestrator |
2026-04-05 05:27:02.957267 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 05:27:02.957280 | orchestrator | Sunday 05 April 2026 05:26:15 +0000 (0:00:00.858) 0:12:52.262 **********
2026-04-05 05:27:02.957291 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957302 | orchestrator |
2026-04-05 05:27:02.957313 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 05:27:02.957324 | orchestrator | Sunday 05 April 2026 05:26:16 +0000 (0:00:00.775) 0:12:53.038 **********
2026-04-05 05:27:02.957335 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957346 | orchestrator |
2026-04-05 05:27:02.957358 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:27:02.957370 | orchestrator | Sunday 05 April 2026 05:26:17 +0000 (0:00:00.745) 0:12:53.783 **********
2026-04-05 05:27:02.957381 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957392 | orchestrator |
2026-04-05 05:27:02.957403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:27:02.957414 | orchestrator | Sunday 05 April 2026 05:26:17 +0000 (0:00:00.786) 0:12:54.570 **********
2026-04-05 05:27:02.957425 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957435 | orchestrator |
2026-04-05 05:27:02.957446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:27:02.957457 | orchestrator | Sunday 05 April 2026 05:26:18 +0000 (0:00:00.801) 0:12:55.371 **********
2026-04-05 05:27:02.957468 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957478 | orchestrator |
2026-04-05 05:27:02.957489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:27:02.957500 | orchestrator | Sunday 05 April 2026 05:26:19 +0000 (0:00:00.784) 0:12:56.155 **********
2026-04-05 05:27:02.957511 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957521 | orchestrator |
2026-04-05 05:27:02.957532 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:27:02.957543 | orchestrator | Sunday 05 April 2026 05:26:20 +0000 (0:00:00.792) 0:12:56.948 **********
2026-04-05 05:27:02.957554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-05 05:27:02.957565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-05 05:27:02.957576 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-05 05:27:02.957586 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957597 | orchestrator |
2026-04-05 05:27:02.957608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:27:02.957659 | orchestrator | Sunday 05 April 2026 05:26:21 +0000 (0:00:01.121) 0:12:58.069 **********
2026-04-05 05:27:02.957699 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-05 05:27:02.957713 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-05 05:27:02.957726 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-05 05:27:02.957738 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957751 | orchestrator |
2026-04-05 05:27:02.957764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:27:02.957777 | orchestrator | Sunday 05 April 2026 05:26:22 +0000 (0:00:01.109) 0:12:59.178 **********
2026-04-05 05:27:02.957792 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-04-05 05:27:02.957804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-04-05 05:27:02.957817 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-04-05 05:27:02.957829 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957842 | orchestrator |
2026-04-05 05:27:02.957855 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:27:02.957868 | orchestrator | Sunday 05 April 2026 05:26:23 +0000 (0:00:01.066) 0:13:00.244 **********
2026-04-05 05:27:02.957880 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957892 | orchestrator |
2026-04-05 05:27:02.957905 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:27:02.957917 | orchestrator | Sunday 05 April 2026 05:26:24 +0000 (0:00:00.941) 0:13:01.072 **********
2026-04-05 05:27:02.957931 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2026-04-05 05:27:02.957943 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.957956 | orchestrator |
2026-04-05 05:27:02.957969 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:27:02.957982 | orchestrator | Sunday 05 April 2026 05:26:25 +0000 (0:00:00.941) 0:13:02.014 **********
2026-04-05 05:27:02.957994 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958009 | orchestrator |
2026-04-05 05:27:02.958078 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-05 05:27:02.958090 | orchestrator | Sunday 05 April 2026 05:26:27 +0000 (0:00:02.081) 0:13:04.096 **********
2026-04-05 05:27:02.958101 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958112 | orchestrator |
2026-04-05 05:27:02.958122 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-05 05:27:02.958133 | orchestrator | Sunday 05 April 2026 05:26:28 +0000 (0:00:00.788) 0:13:04.884 **********
2026-04-05 05:27:02.958144 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-04-05 05:27:02.958156 | orchestrator |
2026-04-05 05:27:02.958176 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-05 05:27:02.958187 | orchestrator | Sunday 05 April 2026 05:26:29 +0000 (0:00:01.241) 0:13:06.125 **********
2026-04-05 05:27:02.958198 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-05 05:27:02.958209 | orchestrator |
2026-04-05 05:27:02.958219 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-05 05:27:02.958230 | orchestrator | Sunday 05 April 2026 05:26:32 +0000 (0:00:03.094) 0:13:09.220 **********
2026-04-05 05:27:02.958241 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.958251 | orchestrator |
2026-04-05 05:27:02.958263 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-05 05:27:02.958291 | orchestrator | Sunday 05 April 2026 05:26:33 +0000 (0:00:01.173) 0:13:10.394 **********
2026-04-05 05:27:02.958302 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958313 | orchestrator |
2026-04-05 05:27:02.958324 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-05 05:27:02.958335 | orchestrator | Sunday 05 April 2026 05:26:34 +0000 (0:00:01.127) 0:13:11.521 **********
2026-04-05 05:27:02.958346 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958356 | orchestrator |
2026-04-05 05:27:02.958367 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-05 05:27:02.958378 | orchestrator | Sunday 05 April 2026 05:26:36 +0000 (0:00:01.201) 0:13:12.722 **********
2026-04-05 05:27:02.958398 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:27:02.958409 | orchestrator |
2026-04-05 05:27:02.958419 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-05 05:27:02.958430 | orchestrator | Sunday 05 April 2026 05:26:38 +0000 (0:00:02.096) 0:13:14.819 **********
2026-04-05 05:27:02.958441 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958452 | orchestrator |
2026-04-05 05:27:02.958462 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-05 05:27:02.958473 | orchestrator | Sunday 05 April 2026 05:26:39 +0000 (0:00:01.715) 0:13:16.534 **********
2026-04-05 05:27:02.958484 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958494 | orchestrator |
2026-04-05 05:27:02.958505 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-05 05:27:02.958516 | orchestrator | Sunday 05 April 2026 05:26:41 +0000 (0:00:01.485) 0:13:18.019 **********
2026-04-05 05:27:02.958527 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958537 | orchestrator |
2026-04-05 05:27:02.958548 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-05 05:27:02.958559 | orchestrator | Sunday 05 April 2026 05:26:42 +0000 (0:00:01.442) 0:13:19.462 **********
2026-04-05 05:27:02.958569 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:27:02.958580 | orchestrator |
2026-04-05 05:27:02.958591 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-05 05:27:02.958646 | orchestrator | Sunday 05 April 2026 05:26:44 +0000 (0:00:01.978) 0:13:21.441 **********
2026-04-05 05:27:02.958658 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:27:02.958669 | orchestrator |
2026-04-05 05:27:02.958680 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-05 05:27:02.958690 | orchestrator | Sunday 05 April 2026 05:26:46 +0000 (0:00:01.605) 0:13:23.046 **********
2026-04-05 05:27:02.958713 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:27:02.958738 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-05 05:27:02.958755 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 05:27:02.958766 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-05 05:27:02.958777 | orchestrator |
2026-04-05 05:27:02.958788 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-05 05:27:02.958798 | orchestrator | Sunday 05 April 2026 05:26:50 +0000 (0:00:04.003) 0:13:27.050 **********
2026-04-05 05:27:02.958809 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:27:02.958820 | orchestrator |
2026-04-05 05:27:02.958831 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-05 05:27:02.958841 | orchestrator | Sunday 05 April 2026 05:26:52 +0000 (0:00:02.069) 0:13:29.119 **********
2026-04-05 05:27:02.958852 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958863 | orchestrator |
2026-04-05 05:27:02.958873 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-05 05:27:02.958884 | orchestrator | Sunday 05 April 2026 05:26:53 +0000 (0:00:01.158) 0:13:30.277 **********
2026-04-05 05:27:02.958895 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958906 | orchestrator |
2026-04-05 05:27:02.958917 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-05 05:27:02.958927 | orchestrator | Sunday 05 April 2026 05:26:54 +0000 (0:00:01.206) 0:13:31.484 **********
2026-04-05 05:27:02.958938 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958949 | orchestrator |
2026-04-05 05:27:02.958960 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-05 05:27:02.958971 | orchestrator | Sunday 05 April 2026 05:26:56 +0000 (0:00:01.829) 0:13:33.314 **********
2026-04-05 05:27:02.958982 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:27:02.958992 | orchestrator |
2026-04-05 05:27:02.959003 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-05 05:27:02.959021 | orchestrator | Sunday 05 April 2026 05:26:58 +0000 (0:00:01.491) 0:13:34.805 **********
2026-04-05 05:27:02.959032 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.959043 | orchestrator |
2026-04-05 05:27:02.959054 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-05 05:27:02.959064 | orchestrator | Sunday 05 April 2026 05:26:58 +0000 (0:00:00.823) 0:13:35.629 **********
2026-04-05 05:27:02.959075 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-04-05 05:27:02.959086 | orchestrator |
2026-04-05 05:27:02.959097 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-05 05:27:02.959107 | orchestrator | Sunday 05 April 2026 05:27:00 +0000 (0:00:01.110) 0:13:36.739 **********
2026-04-05 05:27:02.959118 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.959129 | orchestrator |
2026-04-05 05:27:02.959139 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-05 05:27:02.959150 | orchestrator | Sunday 05 April 2026 05:27:01 +0000 (0:00:01.190) 0:13:37.930 **********
2026-04-05 05:27:02.959161 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:27:02.959171 | orchestrator |
2026-04-05 05:27:02.959182 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-05 05:27:02.959193 | orchestrator | Sunday 05 April 2026 05:27:02 +0000 (0:00:01.139) 0:13:39.069 **********
2026-04-05 05:27:02.959204 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-04-05 05:27:02.959214 | orchestrator |
2026-04-05 05:27:02.959232 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-05 05:28:12.622455 | orchestrator | Sunday 05 April 2026 05:27:03 +0000 (0:00:01.304) 0:13:40.373 **********
2026-04-05 05:28:12.622565 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.622581 | orchestrator |
2026-04-05 05:28:12.622594 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-05 05:28:12.622605 | orchestrator | Sunday 05 April 2026 05:27:06 +0000 (0:00:02.442) 0:13:42.816 **********
2026-04-05 05:28:12.622616 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.622627 | orchestrator |
2026-04-05 05:28:12.622641 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-05 05:28:12.622660 | orchestrator | Sunday 05 April 2026 05:27:08 +0000 (0:00:01.970) 0:13:44.786 **********
2026-04-05 05:28:12.622678 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.622695 | orchestrator |
2026-04-05 05:28:12.622714 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-05 05:28:12.622845 | orchestrator | Sunday 05 April 2026 05:27:10 +0000 (0:00:02.450) 0:13:47.237 **********
2026-04-05 05:28:12.622864 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:28:12.622882 | orchestrator |
2026-04-05 05:28:12.622900 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-05 05:28:12.622917 | orchestrator | Sunday 05 April 2026 05:27:13 +0000 (0:00:03.004) 0:13:50.241 **********
2026-04-05 05:28:12.622933 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-04-05 05:28:12.622951 | orchestrator |
2026-04-05 05:28:12.622969 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-05 05:28:12.622987 | orchestrator | Sunday 05 April 2026 05:27:14 +0000 (0:00:01.125) 0:13:51.367 **********
2026-04-05 05:28:12.623005 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-05 05:28:12.623024 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.623043 | orchestrator |
2026-04-05 05:28:12.623062 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-05 05:28:12.623081 | orchestrator | Sunday 05 April 2026 05:27:37 +0000 (0:00:23.076) 0:14:14.443 **********
2026-04-05 05:28:12.623102 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.623123 | orchestrator |
2026-04-05 05:28:12.623142 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-05 05:28:12.623163 | orchestrator | Sunday 05 April 2026 05:27:40 +0000 (0:00:02.782) 0:14:17.225 **********
2026-04-05 05:28:12.623220 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:28:12.623244 | orchestrator |
2026-04-05 05:28:12.623264 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-05 05:28:12.623283 | orchestrator | Sunday 05 April 2026 05:27:41 +0000 (0:00:00.765) 0:14:17.991 **********
2026-04-05 05:28:12.623322 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-05 05:28:12.623344 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-05 05:28:12.623364 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-05 05:28:12.623383 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-05 05:28:12.623403 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-05 05:28:12.623446 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}]) 
2026-04-05 05:28:12.623468 | orchestrator |
2026-04-05 05:28:12.623487 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-05 05:28:12.623506 | orchestrator | Sunday 05 April 2026 05:27:51 +0000 (0:00:09.854) 0:14:27.846 **********
2026-04-05 05:28:12.623524 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:28:12.623543 | orchestrator |
2026-04-05 05:28:12.623560 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:28:12.623578 | orchestrator | Sunday 05 April 2026 05:27:53 +0000 (0:00:02.276) 0:14:30.122 **********
2026-04-05 05:28:12.623596 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:28:12.623614 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-05 05:28:12.623632 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-05 05:28:12.623650 | orchestrator |
2026-04-05 05:28:12.623668 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:28:12.623686 | orchestrator | Sunday 05 April 2026 05:27:55 +0000 (0:00:01.597) 0:14:31.720 **********
2026-04-05 05:28:12.623704 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-04-05 05:28:12.623775 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-04-05 05:28:12.623796 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2026-04-05 05:28:12.623816 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:28:12.623836 | orchestrator |
2026-04-05 05:28:12.623856 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-05 05:28:12.623876 | orchestrator | Sunday 05 April 2026 05:27:56 +0000 (0:00:01.123) 0:14:32.844 **********
2026-04-05 05:28:12.623896 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:28:12.623916 | orchestrator |
2026-04-05 05:28:12.623936 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-05 05:28:12.623955 | orchestrator | Sunday 05 April 2026 05:27:56 +0000 (0:00:00.772) 0:14:33.616 **********
2026-04-05 05:28:12.623975 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:28:12.623995 | orchestrator |
2026-04-05 05:28:12.624014 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-05 05:28:12.624034 | orchestrator |
2026-04-05 05:28:12.624054 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-05 05:28:12.624074 | orchestrator | Sunday 05 April 2026 05:27:59 +0000 (0:00:02.268) 0:14:35.885 **********
2026-04-05 05:28:12.624095 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624115 | orchestrator |
2026-04-05 05:28:12.624135 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-05 05:28:12.624163 | orchestrator | Sunday 05 April 2026 05:28:00 +0000 (0:00:01.116) 0:14:37.001 **********
2026-04-05 05:28:12.624183 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624202 | orchestrator |
2026-04-05 05:28:12.624222 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-05 05:28:12.624242 | orchestrator | Sunday 05 April 2026 05:28:01 +0000 (0:00:00.779) 0:14:37.790 **********
2026-04-05 05:28:12.624261 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:28:12.624281 | orchestrator |
2026-04-05 05:28:12.624301 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-05 05:28:12.624320 | orchestrator | Sunday 05 April 2026 05:28:01 +0000 (0:00:00.779) 0:14:38.570 **********
2026-04-05 05:28:12.624339 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624358 | orchestrator |
2026-04-05 05:28:12.624377 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:28:12.624397 | orchestrator | Sunday 05 April 2026 05:28:02 +0000 (0:00:00.819) 0:14:39.390 **********
2026-04-05 05:28:12.624416 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-05 05:28:12.624435 | orchestrator |
2026-04-05 05:28:12.624454 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:28:12.624474 | orchestrator | Sunday 05 April 2026 05:28:03 +0000 (0:00:01.104) 0:14:40.494 **********
2026-04-05 05:28:12.624494 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624514 | orchestrator |
2026-04-05 05:28:12.624534 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:28:12.624554 | orchestrator | Sunday 05 April 2026 05:28:05 +0000 (0:00:01.527) 0:14:42.022 **********
2026-04-05 05:28:12.624574 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624594 | orchestrator |
2026-04-05 05:28:12.624614 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:28:12.624635 | orchestrator | Sunday 05 April 2026 05:28:06 +0000 (0:00:01.137) 0:14:43.160 **********
2026-04-05 05:28:12.624655 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624675 | orchestrator |
2026-04-05 05:28:12.624695 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:28:12.624714 | orchestrator | Sunday 05 April 2026 05:28:07 +0000 (0:00:01.465) 0:14:44.625 **********
2026-04-05 05:28:12.624757 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624775 | orchestrator |
2026-04-05 05:28:12.624792 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:28:12.624809 | orchestrator | Sunday 05 April 2026 05:28:09 +0000 (0:00:01.218) 0:14:45.844 **********
2026-04-05 05:28:12.624838 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624857 | orchestrator |
2026-04-05 05:28:12.624875 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:28:12.624893 | orchestrator | Sunday 05 April 2026 05:28:10 +0000 (0:00:01.167) 0:14:47.012 **********
2026-04-05 05:28:12.624910 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:12.624928 | orchestrator |
2026-04-05 05:28:12.624946 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:28:12.624965 | orchestrator | Sunday 05 April 2026 05:28:11 +0000 (0:00:01.141) 0:14:48.153 **********
2026-04-05 05:28:12.624982 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:28:12.625000 | orchestrator |
2026-04-05 05:28:12.625018 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:28:12.625048 | orchestrator | Sunday 05 April 2026 05:28:12 +0000 (0:00:01.169) 0:14:49.323 **********
2026-04-05 05:28:38.507499 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:38.507609 | orchestrator |
2026-04-05 05:28:38.507622 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:28:38.507631 | orchestrator | Sunday 05 April 2026 05:28:13 +0000 (0:00:01.147) 0:14:50.470 **********
2026-04-05 05:28:38.507638 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:28:38.507645 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:28:38.507652 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:28:38.507658 | orchestrator |
2026-04-05 05:28:38.507665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:28:38.507671 | orchestrator | Sunday 05 April 2026 05:28:15 +0000 (0:00:02.075) 0:14:52.545 **********
2026-04-05 05:28:38.507678 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:28:38.507684 | orchestrator |
2026-04-05 05:28:38.507690 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:28:38.507696 | orchestrator | Sunday 05 April 2026 05:28:17 +0000 (0:00:01.345) 0:14:53.891 **********
2026-04-05 05:28:38.507703 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:28:38.507709 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:28:38.507715 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:28:38.507721 | orchestrator |
2026-04-05 05:28:38.507727 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 05:28:38.507734 | orchestrator | Sunday 05 April 2026 05:28:20 +0000 (0:00:03.216) 0:14:57.107 **********
2026-04-05 05:28:38.507741 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0) 
2026-04-05 05:28:38.507748 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1) 
2026-04-05 05:28:38.507810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2) 
2026-04-05 05:28:38.507819 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:28:38.507825 | orchestrator |
2026-04-05 05:28:38.507831 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 05:28:38.507837 | orchestrator | Sunday 05 April 2026 05:28:22 +0000 (0:00:01.789) 0:14:58.896 **********
2026-04-05 05:28:38.507859 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 
2026-04-05 05:28:38.507868 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:28:38.507875 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:28:38.507900 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.507907 | orchestrator | 2026-04-05 05:28:38.507913 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:28:38.507920 | orchestrator | Sunday 05 April 2026 05:28:24 +0000 (0:00:02.028) 0:15:00.924 ********** 2026-04-05 05:28:38.507927 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:38.507936 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:38.507943 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:38.507949 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.507955 | orchestrator | 2026-04-05 05:28:38.507961 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:28:38.507967 | orchestrator | Sunday 05 April 2026 05:28:25 +0000 (0:00:01.378) 0:15:02.303 ********** 2026-04-05 05:28:38.507989 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:28:17.713142', 'end': '2026-04-05 05:28:17.757905', 'delta': '0:00:00.044763', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:28:38.508008 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:28:18.626874', 'end': '2026-04-05 05:28:18.682777', 'delta': '0:00:00.055903', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:28:38.508019 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd0e8f8775caf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:28:19.171814', 'end': '2026-04-05 05:28:19.205960', 'delta': '0:00:00.034146', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0e8f8775caf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:28:38.508031 | orchestrator | 2026-04-05 05:28:38.508038 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:28:38.508044 | orchestrator | Sunday 05 April 2026 05:28:26 +0000 (0:00:01.238) 0:15:03.542 ********** 2026-04-05 05:28:38.508050 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:38.508057 | orchestrator | 2026-04-05 05:28:38.508064 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:28:38.508072 | orchestrator | Sunday 05 April 2026 05:28:28 +0000 (0:00:01.275) 0:15:04.817 ********** 2026-04-05 05:28:38.508083 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.508095 | orchestrator | 2026-04-05 05:28:38.508105 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:28:38.508116 | orchestrator | Sunday 05 April 2026 05:28:29 +0000 (0:00:01.299) 0:15:06.116 ********** 2026-04-05 05:28:38.508127 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:38.508138 | orchestrator | 2026-04-05 05:28:38.508148 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-04-05 05:28:38.508158 | orchestrator | Sunday 05 April 2026 05:28:30 +0000 (0:00:01.229) 0:15:07.346 ********** 2026-04-05 05:28:38.508169 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-04-05 05:28:38.508180 | orchestrator | 2026-04-05 05:28:38.508191 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:28:38.508204 | orchestrator | Sunday 05 April 2026 05:28:32 +0000 (0:00:02.107) 0:15:09.453 ********** 2026-04-05 05:28:38.508215 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:38.508225 | orchestrator | 2026-04-05 05:28:38.508237 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 05:28:38.508247 | orchestrator | Sunday 05 April 2026 05:28:33 +0000 (0:00:01.174) 0:15:10.628 ********** 2026-04-05 05:28:38.508265 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.508277 | orchestrator | 2026-04-05 05:28:38.508288 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 05:28:38.508299 | orchestrator | Sunday 05 April 2026 05:28:35 +0000 (0:00:01.131) 0:15:11.760 ********** 2026-04-05 05:28:38.508309 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.508317 | orchestrator | 2026-04-05 05:28:38.508325 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:28:38.508332 | orchestrator | Sunday 05 April 2026 05:28:36 +0000 (0:00:01.208) 0:15:12.968 ********** 2026-04-05 05:28:38.508344 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.508354 | orchestrator | 2026-04-05 05:28:38.508364 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 05:28:38.508375 | orchestrator | Sunday 05 April 2026 05:28:37 +0000 (0:00:01.126) 0:15:14.095 ********** 
2026-04-05 05:28:38.508385 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:38.508396 | orchestrator | 2026-04-05 05:28:38.508407 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 05:28:38.508425 | orchestrator | Sunday 05 April 2026 05:28:38 +0000 (0:00:01.118) 0:15:15.213 ********** 2026-04-05 05:28:45.692203 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692298 | orchestrator | 2026-04-05 05:28:45.692312 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 05:28:45.692324 | orchestrator | Sunday 05 April 2026 05:28:39 +0000 (0:00:01.137) 0:15:16.351 ********** 2026-04-05 05:28:45.692335 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692345 | orchestrator | 2026-04-05 05:28:45.692355 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 05:28:45.692364 | orchestrator | Sunday 05 April 2026 05:28:40 +0000 (0:00:01.173) 0:15:17.524 ********** 2026-04-05 05:28:45.692395 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692406 | orchestrator | 2026-04-05 05:28:45.692416 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 05:28:45.692425 | orchestrator | Sunday 05 April 2026 05:28:41 +0000 (0:00:01.110) 0:15:18.635 ********** 2026-04-05 05:28:45.692435 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692444 | orchestrator | 2026-04-05 05:28:45.692454 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 05:28:45.692464 | orchestrator | Sunday 05 April 2026 05:28:43 +0000 (0:00:01.198) 0:15:19.833 ********** 2026-04-05 05:28:45.692474 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692483 | orchestrator | 2026-04-05 05:28:45.692493 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-05 05:28:45.692502 | orchestrator | Sunday 05 April 2026 05:28:44 +0000 (0:00:01.190) 0:15:21.024 ********** 2026-04-05 05:28:45.692514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:28:45.692577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:28:45.692652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:28:45.692672 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:45.692682 | orchestrator | 2026-04-05 05:28:45.692692 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:28:45.692701 | orchestrator | Sunday 05 April 2026 05:28:45 +0000 (0:00:01.302) 0:15:22.327 ********** 2026-04-05 05:28:45.692712 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:45.692737 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.468863 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.468990 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469059 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469074 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469085 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469163 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469181 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469193 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:28:54.469206 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:54.469219 | orchestrator | 2026-04-05 05:28:54.469232 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:28:54.469244 | 
orchestrator | Sunday 05 April 2026 05:28:46 +0000 (0:00:01.266) 0:15:23.593 ********** 2026-04-05 05:28:54.469255 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:54.469276 | orchestrator | 2026-04-05 05:28:54.469287 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 05:28:54.469298 | orchestrator | Sunday 05 April 2026 05:28:48 +0000 (0:00:01.475) 0:15:25.069 ********** 2026-04-05 05:28:54.469308 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:54.469319 | orchestrator | 2026-04-05 05:28:54.469330 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:28:54.469343 | orchestrator | Sunday 05 April 2026 05:28:49 +0000 (0:00:01.114) 0:15:26.184 ********** 2026-04-05 05:28:54.469355 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:28:54.469367 | orchestrator | 2026-04-05 05:28:54.469381 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:28:54.469394 | orchestrator | Sunday 05 April 2026 05:28:50 +0000 (0:00:01.473) 0:15:27.657 ********** 2026-04-05 05:28:54.469407 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:54.469420 | orchestrator | 2026-04-05 05:28:54.469432 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:28:54.469445 | orchestrator | Sunday 05 April 2026 05:28:52 +0000 (0:00:01.093) 0:15:28.750 ********** 2026-04-05 05:28:54.469457 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:54.469469 | orchestrator | 2026-04-05 05:28:54.469482 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:28:54.469494 | orchestrator | Sunday 05 April 2026 05:28:53 +0000 (0:00:01.294) 0:15:30.045 ********** 2026-04-05 05:28:54.469506 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:28:54.469519 | orchestrator | 2026-04-05 05:28:54.469531 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 05:28:54.469550 | orchestrator | Sunday 05 April 2026 05:28:54 +0000 (0:00:01.132) 0:15:31.178 ********** 2026-04-05 05:29:33.990325 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-05 05:29:33.990442 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-05 05:29:33.990458 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 05:29:33.990470 | orchestrator | 2026-04-05 05:29:33.990482 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 05:29:33.990494 | orchestrator | Sunday 05 April 2026 05:28:56 +0000 (0:00:02.147) 0:15:33.325 ********** 2026-04-05 05:29:33.990505 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 05:29:33.990517 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-05 05:29:33.990527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 05:29:33.990538 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.990549 | orchestrator | 2026-04-05 05:29:33.990560 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 05:29:33.990571 | orchestrator | Sunday 05 April 2026 05:28:57 +0000 (0:00:01.202) 0:15:34.528 ********** 2026-04-05 05:29:33.990582 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.990592 | orchestrator | 2026-04-05 05:29:33.990604 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 05:29:33.990614 | orchestrator | Sunday 05 April 2026 05:28:58 +0000 (0:00:01.165) 0:15:35.693 ********** 2026-04-05 05:29:33.990625 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:29:33.990636 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-04-05 05:29:33.990647 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 05:29:33.990658 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:29:33.990669 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:29:33.990695 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:29:33.990706 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:29:33.990717 | orchestrator | 2026-04-05 05:29:33.990750 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 05:29:33.990761 | orchestrator | Sunday 05 April 2026 05:29:01 +0000 (0:00:02.392) 0:15:38.086 ********** 2026-04-05 05:29:33.990772 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:29:33.990783 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:29:33.990793 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 05:29:33.990804 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:29:33.990815 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:29:33.990825 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:29:33.990864 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:29:33.990878 | orchestrator | 2026-04-05 05:29:33.990890 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-05 05:29:33.990903 | orchestrator | Sunday 05 April 2026 05:29:03 +0000 (0:00:02.267) 0:15:40.354 
********** 2026-04-05 05:29:33.990915 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.990928 | orchestrator | 2026-04-05 05:29:33.990940 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-05 05:29:33.990953 | orchestrator | Sunday 05 April 2026 05:29:04 +0000 (0:00:00.892) 0:15:41.247 ********** 2026-04-05 05:29:33.990965 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.990977 | orchestrator | 2026-04-05 05:29:33.990990 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-05 05:29:33.991002 | orchestrator | Sunday 05 April 2026 05:29:05 +0000 (0:00:00.907) 0:15:42.154 ********** 2026-04-05 05:29:33.991015 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991027 | orchestrator | 2026-04-05 05:29:33.991040 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-05 05:29:33.991053 | orchestrator | Sunday 05 April 2026 05:29:06 +0000 (0:00:00.773) 0:15:42.928 ********** 2026-04-05 05:29:33.991065 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991077 | orchestrator | 2026-04-05 05:29:33.991090 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-05 05:29:33.991101 | orchestrator | Sunday 05 April 2026 05:29:07 +0000 (0:00:00.886) 0:15:43.814 ********** 2026-04-05 05:29:33.991114 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991126 | orchestrator | 2026-04-05 05:29:33.991138 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-05 05:29:33.991151 | orchestrator | Sunday 05 April 2026 05:29:07 +0000 (0:00:00.794) 0:15:44.609 ********** 2026-04-05 05:29:33.991164 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-05 05:29:33.991176 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  
2026-04-05 05:29:33.991188 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-05 05:29:33.991201 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991213 | orchestrator | 2026-04-05 05:29:33.991226 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-05 05:29:33.991237 | orchestrator | Sunday 05 April 2026 05:29:08 +0000 (0:00:01.091) 0:15:45.701 ********** 2026-04-05 05:29:33.991247 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-05 05:29:33.991258 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-05 05:29:33.991286 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-05 05:29:33.991297 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-05 05:29:33.991308 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-05 05:29:33.991318 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-05 05:29:33.991337 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991348 | orchestrator | 2026-04-05 05:29:33.991359 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-05 05:29:33.991370 | orchestrator | Sunday 05 April 2026 05:29:10 +0000 (0:00:01.862) 0:15:47.563 ********** 2026-04-05 05:29:33.991381 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 05:29:33.991392 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-05 05:29:33.991402 | orchestrator | 2026-04-05 05:29:33.991413 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-05 05:29:33.991424 | orchestrator | Sunday 05 April 2026 05:29:13 +0000 (0:00:03.136) 0:15:50.699 
********** 2026-04-05 05:29:33.991434 | orchestrator | changed: [testbed-node-2] 2026-04-05 05:29:33.991445 | orchestrator | 2026-04-05 05:29:33.991456 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 05:29:33.991467 | orchestrator | Sunday 05 April 2026 05:29:16 +0000 (0:00:02.185) 0:15:52.885 ********** 2026-04-05 05:29:33.991477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-04-05 05:29:33.991489 | orchestrator | 2026-04-05 05:29:33.991500 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 05:29:33.991511 | orchestrator | Sunday 05 April 2026 05:29:17 +0000 (0:00:01.271) 0:15:54.157 ********** 2026-04-05 05:29:33.991522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-05 05:29:33.991533 | orchestrator | 2026-04-05 05:29:33.991549 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 05:29:33.991560 | orchestrator | Sunday 05 April 2026 05:29:18 +0000 (0:00:01.155) 0:15:55.313 ********** 2026-04-05 05:29:33.991571 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:29:33.991582 | orchestrator | 2026-04-05 05:29:33.991593 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 05:29:33.991603 | orchestrator | Sunday 05 April 2026 05:29:20 +0000 (0:00:01.582) 0:15:56.895 ********** 2026-04-05 05:29:33.991614 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991624 | orchestrator | 2026-04-05 05:29:33.991635 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 05:29:33.991646 | orchestrator | Sunday 05 April 2026 05:29:21 +0000 (0:00:01.152) 0:15:58.047 ********** 2026-04-05 05:29:33.991656 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
05:29:33.991667 | orchestrator | 2026-04-05 05:29:33.991677 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 05:29:33.991688 | orchestrator | Sunday 05 April 2026 05:29:22 +0000 (0:00:01.151) 0:15:59.199 ********** 2026-04-05 05:29:33.991699 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991709 | orchestrator | 2026-04-05 05:29:33.991720 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 05:29:33.991731 | orchestrator | Sunday 05 April 2026 05:29:23 +0000 (0:00:01.180) 0:16:00.380 ********** 2026-04-05 05:29:33.991741 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:29:33.991752 | orchestrator | 2026-04-05 05:29:33.991763 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 05:29:33.991773 | orchestrator | Sunday 05 April 2026 05:29:25 +0000 (0:00:01.597) 0:16:01.977 ********** 2026-04-05 05:29:33.991784 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991794 | orchestrator | 2026-04-05 05:29:33.991805 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 05:29:33.991816 | orchestrator | Sunday 05 April 2026 05:29:26 +0000 (0:00:01.147) 0:16:03.124 ********** 2026-04-05 05:29:33.991826 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.991863 | orchestrator | 2026-04-05 05:29:33.991874 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 05:29:33.991885 | orchestrator | Sunday 05 April 2026 05:29:27 +0000 (0:00:01.119) 0:16:04.244 ********** 2026-04-05 05:29:33.991904 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:29:33.991915 | orchestrator | 2026-04-05 05:29:33.991925 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 05:29:33.991936 | orchestrator | Sunday 05 April 2026 
05:29:29 +0000 (0:00:01.529) 0:16:05.774 ********** 2026-04-05 05:29:33.991946 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:29:33.991957 | orchestrator | 2026-04-05 05:29:33.991968 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 05:29:33.991978 | orchestrator | Sunday 05 April 2026 05:29:30 +0000 (0:00:01.614) 0:16:07.388 ********** 2026-04-05 05:29:33.991989 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.992000 | orchestrator | 2026-04-05 05:29:33.992010 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 05:29:33.992021 | orchestrator | Sunday 05 April 2026 05:29:31 +0000 (0:00:00.794) 0:16:08.183 ********** 2026-04-05 05:29:33.992032 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:29:33.992042 | orchestrator | 2026-04-05 05:29:33.992053 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 05:29:33.992064 | orchestrator | Sunday 05 April 2026 05:29:32 +0000 (0:00:00.927) 0:16:09.111 ********** 2026-04-05 05:29:33.992074 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.992085 | orchestrator | 2026-04-05 05:29:33.992096 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 05:29:33.992106 | orchestrator | Sunday 05 April 2026 05:29:33 +0000 (0:00:00.782) 0:16:09.893 ********** 2026-04-05 05:29:33.992117 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:29:33.992128 | orchestrator | 2026-04-05 05:29:33.992138 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 05:29:33.992149 | orchestrator | Sunday 05 April 2026 05:29:33 +0000 (0:00:00.748) 0:16:10.642 ********** 2026-04-05 05:29:33.992167 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749023 | orchestrator | 2026-04-05 05:30:14.749137 | orchestrator | TASK 
[ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 05:30:14.749153 | orchestrator | Sunday 05 April 2026 05:29:34 +0000 (0:00:00.780) 0:16:11.422 ********** 2026-04-05 05:30:14.749165 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749177 | orchestrator | 2026-04-05 05:30:14.749189 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 05:30:14.749199 | orchestrator | Sunday 05 April 2026 05:29:35 +0000 (0:00:00.786) 0:16:12.209 ********** 2026-04-05 05:30:14.749210 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749221 | orchestrator | 2026-04-05 05:30:14.749232 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 05:30:14.749243 | orchestrator | Sunday 05 April 2026 05:29:36 +0000 (0:00:00.768) 0:16:12.978 ********** 2026-04-05 05:30:14.749254 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.749266 | orchestrator | 2026-04-05 05:30:14.749277 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 05:30:14.749288 | orchestrator | Sunday 05 April 2026 05:29:37 +0000 (0:00:00.800) 0:16:13.779 ********** 2026-04-05 05:30:14.749299 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.749309 | orchestrator | 2026-04-05 05:30:14.749320 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 05:30:14.749331 | orchestrator | Sunday 05 April 2026 05:29:37 +0000 (0:00:00.822) 0:16:14.601 ********** 2026-04-05 05:30:14.749342 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.749353 | orchestrator | 2026-04-05 05:30:14.749364 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 05:30:14.749374 | orchestrator | Sunday 05 April 2026 05:29:38 +0000 (0:00:00.825) 0:16:15.427 ********** 2026-04-05 05:30:14.749385 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749396 | orchestrator | 2026-04-05 05:30:14.749407 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 05:30:14.749418 | orchestrator | Sunday 05 April 2026 05:29:39 +0000 (0:00:00.823) 0:16:16.251 ********** 2026-04-05 05:30:14.749472 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749484 | orchestrator | 2026-04-05 05:30:14.749495 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 05:30:14.749506 | orchestrator | Sunday 05 April 2026 05:29:40 +0000 (0:00:00.803) 0:16:17.054 ********** 2026-04-05 05:30:14.749517 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749530 | orchestrator | 2026-04-05 05:30:14.749543 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 05:30:14.749556 | orchestrator | Sunday 05 April 2026 05:29:41 +0000 (0:00:00.745) 0:16:17.800 ********** 2026-04-05 05:30:14.749568 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749581 | orchestrator | 2026-04-05 05:30:14.749593 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 05:30:14.749606 | orchestrator | Sunday 05 April 2026 05:29:41 +0000 (0:00:00.911) 0:16:18.711 ********** 2026-04-05 05:30:14.749619 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749629 | orchestrator | 2026-04-05 05:30:14.749640 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 05:30:14.749651 | orchestrator | Sunday 05 April 2026 05:29:42 +0000 (0:00:00.777) 0:16:19.489 ********** 2026-04-05 05:30:14.749661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749672 | orchestrator | 2026-04-05 05:30:14.749683 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 
2026-04-05 05:30:14.749694 | orchestrator | Sunday 05 April 2026 05:29:43 +0000 (0:00:00.769) 0:16:20.259 ********** 2026-04-05 05:30:14.749705 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749715 | orchestrator | 2026-04-05 05:30:14.749726 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 05:30:14.749738 | orchestrator | Sunday 05 April 2026 05:29:44 +0000 (0:00:00.812) 0:16:21.071 ********** 2026-04-05 05:30:14.749748 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749759 | orchestrator | 2026-04-05 05:30:14.749770 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 05:30:14.749781 | orchestrator | Sunday 05 April 2026 05:29:45 +0000 (0:00:00.762) 0:16:21.833 ********** 2026-04-05 05:30:14.749791 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749802 | orchestrator | 2026-04-05 05:30:14.749813 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 05:30:14.749824 | orchestrator | Sunday 05 April 2026 05:29:45 +0000 (0:00:00.797) 0:16:22.631 ********** 2026-04-05 05:30:14.749835 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749845 | orchestrator | 2026-04-05 05:30:14.749856 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 05:30:14.749867 | orchestrator | Sunday 05 April 2026 05:29:46 +0000 (0:00:00.800) 0:16:23.432 ********** 2026-04-05 05:30:14.749878 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.749964 | orchestrator | 2026-04-05 05:30:14.749976 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 05:30:14.749987 | orchestrator | Sunday 05 April 2026 05:29:47 +0000 (0:00:00.819) 0:16:24.251 ********** 2026-04-05 05:30:14.749998 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
05:30:14.750008 | orchestrator | 2026-04-05 05:30:14.750072 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 05:30:14.750087 | orchestrator | Sunday 05 April 2026 05:29:48 +0000 (0:00:00.782) 0:16:25.034 ********** 2026-04-05 05:30:14.750098 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.750109 | orchestrator | 2026-04-05 05:30:14.750120 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 05:30:14.750131 | orchestrator | Sunday 05 April 2026 05:29:50 +0000 (0:00:01.722) 0:16:26.756 ********** 2026-04-05 05:30:14.750142 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.750152 | orchestrator | 2026-04-05 05:30:14.750163 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 05:30:14.750174 | orchestrator | Sunday 05 April 2026 05:29:52 +0000 (0:00:02.021) 0:16:28.777 ********** 2026-04-05 05:30:14.750205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-04-05 05:30:14.750217 | orchestrator | 2026-04-05 05:30:14.750247 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 05:30:14.750259 | orchestrator | Sunday 05 April 2026 05:29:53 +0000 (0:00:01.110) 0:16:29.888 ********** 2026-04-05 05:30:14.750269 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750280 | orchestrator | 2026-04-05 05:30:14.750291 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 05:30:14.750302 | orchestrator | Sunday 05 April 2026 05:29:54 +0000 (0:00:01.304) 0:16:31.193 ********** 2026-04-05 05:30:14.750312 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750323 | orchestrator | 2026-04-05 05:30:14.750334 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-04-05 05:30:14.750344 | orchestrator | Sunday 05 April 2026 05:29:55 +0000 (0:00:01.125) 0:16:32.319 ********** 2026-04-05 05:30:14.750355 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 05:30:14.750366 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 05:30:14.750376 | orchestrator | 2026-04-05 05:30:14.750387 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 05:30:14.750398 | orchestrator | Sunday 05 April 2026 05:29:57 +0000 (0:00:01.878) 0:16:34.198 ********** 2026-04-05 05:30:14.750409 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.750419 | orchestrator | 2026-04-05 05:30:14.750430 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 05:30:14.750441 | orchestrator | Sunday 05 April 2026 05:29:58 +0000 (0:00:01.484) 0:16:35.682 ********** 2026-04-05 05:30:14.750451 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750462 | orchestrator | 2026-04-05 05:30:14.750473 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 05:30:14.750483 | orchestrator | Sunday 05 April 2026 05:30:00 +0000 (0:00:01.214) 0:16:36.897 ********** 2026-04-05 05:30:14.750494 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750505 | orchestrator | 2026-04-05 05:30:14.750522 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 05:30:14.750533 | orchestrator | Sunday 05 April 2026 05:30:01 +0000 (0:00:00.832) 0:16:37.730 ********** 2026-04-05 05:30:14.750544 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750555 | orchestrator | 2026-04-05 05:30:14.750565 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 05:30:14.750576 | orchestrator | 
Sunday 05 April 2026 05:30:01 +0000 (0:00:00.880) 0:16:38.611 ********** 2026-04-05 05:30:14.750587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-04-05 05:30:14.750598 | orchestrator | 2026-04-05 05:30:14.750609 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 05:30:14.750619 | orchestrator | Sunday 05 April 2026 05:30:03 +0000 (0:00:01.225) 0:16:39.836 ********** 2026-04-05 05:30:14.750630 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.750641 | orchestrator | 2026-04-05 05:30:14.750651 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 05:30:14.750662 | orchestrator | Sunday 05 April 2026 05:30:04 +0000 (0:00:01.728) 0:16:41.564 ********** 2026-04-05 05:30:14.750673 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 05:30:14.750683 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 05:30:14.750694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 05:30:14.750705 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750715 | orchestrator | 2026-04-05 05:30:14.750726 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-05 05:30:14.750737 | orchestrator | Sunday 05 April 2026 05:30:06 +0000 (0:00:01.163) 0:16:42.728 ********** 2026-04-05 05:30:14.750768 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750779 | orchestrator | 2026-04-05 05:30:14.750789 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-05 05:30:14.750800 | orchestrator | Sunday 05 April 2026 05:30:07 +0000 (0:00:01.162) 0:16:43.890 ********** 2026-04-05 05:30:14.750811 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
05:30:14.750821 | orchestrator | 2026-04-05 05:30:14.750832 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 05:30:14.750843 | orchestrator | Sunday 05 April 2026 05:30:08 +0000 (0:00:01.123) 0:16:45.013 ********** 2026-04-05 05:30:14.750853 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750864 | orchestrator | 2026-04-05 05:30:14.750874 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 05:30:14.750906 | orchestrator | Sunday 05 April 2026 05:30:09 +0000 (0:00:01.242) 0:16:46.256 ********** 2026-04-05 05:30:14.750917 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750928 | orchestrator | 2026-04-05 05:30:14.750939 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 05:30:14.750950 | orchestrator | Sunday 05 April 2026 05:30:10 +0000 (0:00:01.119) 0:16:47.376 ********** 2026-04-05 05:30:14.750960 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:14.750971 | orchestrator | 2026-04-05 05:30:14.750982 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 05:30:14.750992 | orchestrator | Sunday 05 April 2026 05:30:11 +0000 (0:00:00.801) 0:16:48.177 ********** 2026-04-05 05:30:14.751003 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.751014 | orchestrator | 2026-04-05 05:30:14.751024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 05:30:14.751035 | orchestrator | Sunday 05 April 2026 05:30:13 +0000 (0:00:02.198) 0:16:50.375 ********** 2026-04-05 05:30:14.751046 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:14.751056 | orchestrator | 2026-04-05 05:30:14.751067 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 05:30:14.751078 | orchestrator | Sunday 05 April 2026 
05:30:14 +0000 (0:00:00.803) 0:16:51.179 ********** 2026-04-05 05:30:14.751089 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-04-05 05:30:14.751099 | orchestrator | 2026-04-05 05:30:14.751117 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 05:30:52.195819 | orchestrator | Sunday 05 April 2026 05:30:15 +0000 (0:00:01.227) 0:16:52.407 ********** 2026-04-05 05:30:52.195916 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.195928 | orchestrator | 2026-04-05 05:30:52.195962 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 05:30:52.195970 | orchestrator | Sunday 05 April 2026 05:30:16 +0000 (0:00:01.225) 0:16:53.632 ********** 2026-04-05 05:30:52.195978 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.195985 | orchestrator | 2026-04-05 05:30:52.195993 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 05:30:52.196000 | orchestrator | Sunday 05 April 2026 05:30:18 +0000 (0:00:01.179) 0:16:54.812 ********** 2026-04-05 05:30:52.196007 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196015 | orchestrator | 2026-04-05 05:30:52.196022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-05 05:30:52.196029 | orchestrator | Sunday 05 April 2026 05:30:19 +0000 (0:00:01.198) 0:16:56.011 ********** 2026-04-05 05:30:52.196037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196044 | orchestrator | 2026-04-05 05:30:52.196051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-05 05:30:52.196059 | orchestrator | Sunday 05 April 2026 05:30:20 +0000 (0:00:01.199) 0:16:57.210 ********** 2026-04-05 05:30:52.196066 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196073 | 
orchestrator | 2026-04-05 05:30:52.196080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 05:30:52.196108 | orchestrator | Sunday 05 April 2026 05:30:21 +0000 (0:00:01.158) 0:16:58.369 ********** 2026-04-05 05:30:52.196116 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196123 | orchestrator | 2026-04-05 05:30:52.196130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 05:30:52.196137 | orchestrator | Sunday 05 April 2026 05:30:22 +0000 (0:00:01.148) 0:16:59.517 ********** 2026-04-05 05:30:52.196156 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196163 | orchestrator | 2026-04-05 05:30:52.196171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 05:30:52.196178 | orchestrator | Sunday 05 April 2026 05:30:24 +0000 (0:00:01.202) 0:17:00.719 ********** 2026-04-05 05:30:52.196185 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196192 | orchestrator | 2026-04-05 05:30:52.196199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 05:30:52.196206 | orchestrator | Sunday 05 April 2026 05:30:25 +0000 (0:00:01.145) 0:17:01.864 ********** 2026-04-05 05:30:52.196214 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:30:52.196222 | orchestrator | 2026-04-05 05:30:52.196229 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 05:30:52.196237 | orchestrator | Sunday 05 April 2026 05:30:25 +0000 (0:00:00.836) 0:17:02.701 ********** 2026-04-05 05:30:52.196244 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-04-05 05:30:52.196252 | orchestrator | 2026-04-05 05:30:52.196259 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-05 
05:30:52.196266 | orchestrator | Sunday 05 April 2026 05:30:27 +0000 (0:00:01.173) 0:17:03.875 ********** 2026-04-05 05:30:52.196274 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-04-05 05:30:52.196281 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-05 05:30:52.196289 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-05 05:30:52.196296 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-05 05:30:52.196303 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-05 05:30:52.196310 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-05 05:30:52.196317 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-05 05:30:52.196324 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-05 05:30:52.196332 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 05:30:52.196339 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 05:30:52.196346 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 05:30:52.196354 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 05:30:52.196361 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 05:30:52.196368 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 05:30:52.196377 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-04-05 05:30:52.196386 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-04-05 05:30:52.196395 | orchestrator | 2026-04-05 05:30:52.196404 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 05:30:52.196412 | orchestrator | Sunday 05 April 2026 05:30:33 +0000 (0:00:06.384) 0:17:10.259 ********** 2026-04-05 05:30:52.196421 | orchestrator | skipping: 
[testbed-node-2] 2026-04-05 05:30:52.196429 | orchestrator | 2026-04-05 05:30:52.196438 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 05:30:52.196446 | orchestrator | Sunday 05 April 2026 05:30:34 +0000 (0:00:00.807) 0:17:11.066 ********** 2026-04-05 05:30:52.196454 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196463 | orchestrator | 2026-04-05 05:30:52.196471 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 05:30:52.196479 | orchestrator | Sunday 05 April 2026 05:30:35 +0000 (0:00:00.783) 0:17:11.850 ********** 2026-04-05 05:30:52.196494 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196503 | orchestrator | 2026-04-05 05:30:52.196512 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 05:30:52.196520 | orchestrator | Sunday 05 April 2026 05:30:35 +0000 (0:00:00.854) 0:17:12.704 ********** 2026-04-05 05:30:52.196528 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196536 | orchestrator | 2026-04-05 05:30:52.196545 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 05:30:52.196568 | orchestrator | Sunday 05 April 2026 05:30:36 +0000 (0:00:00.758) 0:17:13.463 ********** 2026-04-05 05:30:52.196576 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196584 | orchestrator | 2026-04-05 05:30:52.196592 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 05:30:52.196601 | orchestrator | Sunday 05 April 2026 05:30:37 +0000 (0:00:00.772) 0:17:14.236 ********** 2026-04-05 05:30:52.196609 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:30:52.196617 | orchestrator | 2026-04-05 05:30:52.196625 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-04-05 05:30:52.196634 | orchestrator | Sunday 05 April 2026 05:30:38 +0000 (0:00:00.796) 0:17:15.032 **********
2026-04-05 05:30:52.196642 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196650 | orchestrator |
2026-04-05 05:30:52.196658 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:30:52.196666 | orchestrator | Sunday 05 April 2026 05:30:39 +0000 (0:00:00.956) 0:17:15.988 **********
2026-04-05 05:30:52.196674 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196682 | orchestrator |
2026-04-05 05:30:52.196691 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:30:52.196699 | orchestrator | Sunday 05 April 2026 05:30:40 +0000 (0:00:00.784) 0:17:16.773 **********
2026-04-05 05:30:52.196708 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196716 | orchestrator |
2026-04-05 05:30:52.196725 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:30:52.196733 | orchestrator | Sunday 05 April 2026 05:30:40 +0000 (0:00:00.858) 0:17:17.632 **********
2026-04-05 05:30:52.196740 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196747 | orchestrator |
2026-04-05 05:30:52.196754 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:30:52.196766 | orchestrator | Sunday 05 April 2026 05:30:41 +0000 (0:00:00.780) 0:17:18.413 **********
2026-04-05 05:30:52.196773 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196780 | orchestrator |
2026-04-05 05:30:52.196787 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:30:52.196794 | orchestrator | Sunday 05 April 2026 05:30:42 +0000 (0:00:00.810) 0:17:19.223 **********
2026-04-05 05:30:52.196801 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196808 | orchestrator |
2026-04-05 05:30:52.196815 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:30:52.196823 | orchestrator | Sunday 05 April 2026 05:30:43 +0000 (0:00:00.778) 0:17:20.002 **********
2026-04-05 05:30:52.196830 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196837 | orchestrator |
2026-04-05 05:30:52.196844 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:30:52.196851 | orchestrator | Sunday 05 April 2026 05:30:44 +0000 (0:00:00.903) 0:17:20.906 **********
2026-04-05 05:30:52.196858 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196865 | orchestrator |
2026-04-05 05:30:52.196872 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:30:52.196879 | orchestrator | Sunday 05 April 2026 05:30:45 +0000 (0:00:00.825) 0:17:21.731 **********
2026-04-05 05:30:52.196886 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196893 | orchestrator |
2026-04-05 05:30:52.196900 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 05:30:52.196913 | orchestrator | Sunday 05 April 2026 05:30:45 +0000 (0:00:00.919) 0:17:22.650 **********
2026-04-05 05:30:52.196920 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196927 | orchestrator |
2026-04-05 05:30:52.196959 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 05:30:52.196967 | orchestrator | Sunday 05 April 2026 05:30:46 +0000 (0:00:00.771) 0:17:23.422 **********
2026-04-05 05:30:52.196974 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.196981 | orchestrator |
2026-04-05 05:30:52.196988 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:30:52.196997 | orchestrator | Sunday 05 April 2026 05:30:47 +0000 (0:00:00.792) 0:17:24.215 **********
2026-04-05 05:30:52.197004 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.197011 | orchestrator |
2026-04-05 05:30:52.197018 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:30:52.197025 | orchestrator | Sunday 05 April 2026 05:30:48 +0000 (0:00:00.798) 0:17:25.013 **********
2026-04-05 05:30:52.197032 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.197039 | orchestrator |
2026-04-05 05:30:52.197046 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:30:52.197053 | orchestrator | Sunday 05 April 2026 05:30:49 +0000 (0:00:00.800) 0:17:25.814 **********
2026-04-05 05:30:52.197060 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.197068 | orchestrator |
2026-04-05 05:30:52.197075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:30:52.197082 | orchestrator | Sunday 05 April 2026 05:30:49 +0000 (0:00:00.879) 0:17:26.694 **********
2026-04-05 05:30:52.197089 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.197096 | orchestrator |
2026-04-05 05:30:52.197103 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:30:52.197111 | orchestrator | Sunday 05 April 2026 05:30:50 +0000 (0:00:00.845) 0:17:27.539 **********
2026-04-05 05:30:52.197118 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:30:52.197125 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:30:52.197132 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:30:52.197139 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:30:52.197146 | orchestrator |
2026-04-05 05:30:52.197153 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:30:52.197161 | orchestrator | Sunday 05 April 2026 05:30:51 +0000 (0:00:01.116) 0:17:28.655 **********
2026-04-05 05:30:52.197168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:30:52.197179 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:32:12.776991 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:32:12.777192 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.777219 | orchestrator |
2026-04-05 05:32:12.777238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:32:12.777258 | orchestrator | Sunday 05 April 2026 05:30:53 +0000 (0:00:01.171) 0:17:29.826 **********
2026-04-05 05:32:12.777277 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:32:12.777296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:32:12.777314 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:32:12.777334 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.777353 | orchestrator |
2026-04-05 05:32:12.777372 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:32:12.777392 | orchestrator | Sunday 05 April 2026 05:30:54 +0000 (0:00:01.106) 0:17:30.933 **********
2026-04-05 05:32:12.777411 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.777430 | orchestrator |
2026-04-05 05:32:12.777448 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:32:12.777500 | orchestrator | Sunday 05 April 2026 05:30:55 +0000 (0:00:00.789) 0:17:31.723 **********
2026-04-05 05:32:12.777520 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-05 05:32:12.777542 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.777562 | orchestrator |
2026-04-05 05:32:12.777582 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:32:12.777698 | orchestrator | Sunday 05 April 2026 05:30:55 +0000 (0:00:00.948) 0:17:32.672 **********
2026-04-05 05:32:12.777723 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.777741 | orchestrator |
2026-04-05 05:32:12.777760 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-05 05:32:12.777795 | orchestrator | Sunday 05 April 2026 05:30:57 +0000 (0:00:01.437) 0:17:34.109 **********
2026-04-05 05:32:12.777808 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.777819 | orchestrator |
2026-04-05 05:32:12.777830 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-05 05:32:12.777841 | orchestrator | Sunday 05 April 2026 05:30:58 +0000 (0:00:00.800) 0:17:34.910 **********
2026-04-05 05:32:12.777853 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-04-05 05:32:12.777864 | orchestrator |
2026-04-05 05:32:12.777876 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-05 05:32:12.777887 | orchestrator | Sunday 05 April 2026 05:30:59 +0000 (0:00:01.150) 0:17:36.061 **********
2026-04-05 05:32:12.777899 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.777908 | orchestrator |
2026-04-05 05:32:12.777918 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-05 05:32:12.777927 | orchestrator | Sunday 05 April 2026 05:31:02 +0000 (0:00:03.574) 0:17:39.636 **********
2026-04-05 05:32:12.777937 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.777946 | orchestrator |
2026-04-05 05:32:12.777956 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-05 05:32:12.777965 | orchestrator | Sunday 05 April 2026 05:31:04 +0000 (0:00:01.344) 0:17:40.981 **********
2026-04-05 05:32:12.777975 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.777984 | orchestrator |
2026-04-05 05:32:12.777994 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-05 05:32:12.778003 | orchestrator | Sunday 05 April 2026 05:31:05 +0000 (0:00:01.182) 0:17:42.163 **********
2026-04-05 05:32:12.778013 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778106 | orchestrator |
2026-04-05 05:32:12.778126 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-05 05:32:12.778136 | orchestrator | Sunday 05 April 2026 05:31:06 +0000 (0:00:01.187) 0:17:43.351 **********
2026-04-05 05:32:12.778145 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:32:12.778155 | orchestrator |
2026-04-05 05:32:12.778165 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-05 05:32:12.778175 | orchestrator | Sunday 05 April 2026 05:31:08 +0000 (0:00:02.045) 0:17:45.397 **********
2026-04-05 05:32:12.778184 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778193 | orchestrator |
2026-04-05 05:32:12.778203 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-05 05:32:12.778212 | orchestrator | Sunday 05 April 2026 05:31:10 +0000 (0:00:01.677) 0:17:47.074 **********
2026-04-05 05:32:12.778221 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778231 | orchestrator |
2026-04-05 05:32:12.778240 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-05 05:32:12.778250 | orchestrator | Sunday 05 April 2026 05:31:11 +0000 (0:00:01.452) 0:17:48.527 **********
2026-04-05 05:32:12.778259 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778268 | orchestrator |
2026-04-05 05:32:12.778278 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-05 05:32:12.778287 | orchestrator | Sunday 05 April 2026 05:31:13 +0000 (0:00:01.550) 0:17:50.078 **********
2026-04-05 05:32:12.778297 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:32:12.778317 | orchestrator |
2026-04-05 05:32:12.778327 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-05 05:32:12.778336 | orchestrator | Sunday 05 April 2026 05:31:14 +0000 (0:00:01.599) 0:17:51.677 **********
2026-04-05 05:32:12.778346 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:32:12.778355 | orchestrator |
2026-04-05 05:32:12.778365 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-05 05:32:12.778374 | orchestrator | Sunday 05 April 2026 05:31:16 +0000 (0:00:01.524) 0:17:53.202 **********
2026-04-05 05:32:12.778384 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:32:12.778393 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-05 05:32:12.778403 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-05 05:32:12.778412 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-05 05:32:12.778422 | orchestrator |
2026-04-05 05:32:12.778451 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-05 05:32:12.778462 | orchestrator | Sunday 05 April 2026 05:31:20 +0000 (0:00:03.971) 0:17:57.174 **********
2026-04-05 05:32:12.778471 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:32:12.778481 | orchestrator |
2026-04-05 05:32:12.778491 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-05 05:32:12.778500 | orchestrator | Sunday 05 April 2026 05:31:22 +0000 (0:00:01.978) 0:17:59.152 **********
2026-04-05 05:32:12.778510 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778519 | orchestrator |
2026-04-05 05:32:12.778529 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-05 05:32:12.778538 | orchestrator | Sunday 05 April 2026 05:31:23 +0000 (0:00:01.176) 0:18:00.328 **********
2026-04-05 05:32:12.778547 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778557 | orchestrator |
2026-04-05 05:32:12.778566 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-05 05:32:12.778575 | orchestrator | Sunday 05 April 2026 05:31:24 +0000 (0:00:01.133) 0:18:01.462 **********
2026-04-05 05:32:12.778585 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778594 | orchestrator |
2026-04-05 05:32:12.778604 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-05 05:32:12.778613 | orchestrator | Sunday 05 April 2026 05:31:26 +0000 (0:00:02.210) 0:18:03.673 **********
2026-04-05 05:32:12.778622 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778632 | orchestrator |
2026-04-05 05:32:12.778641 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-05 05:32:12.778651 | orchestrator | Sunday 05 April 2026 05:31:28 +0000 (0:00:01.439) 0:18:05.113 **********
2026-04-05 05:32:12.778660 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.778669 | orchestrator |
2026-04-05 05:32:12.778679 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-05 05:32:12.778695 | orchestrator | Sunday 05 April 2026 05:31:29 +0000 (0:00:00.788) 0:18:05.901 **********
2026-04-05 05:32:12.778704 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-04-05 05:32:12.778714 | orchestrator |
2026-04-05 05:32:12.778723 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-05 05:32:12.778733 | orchestrator | Sunday 05 April 2026 05:31:30 +0000 (0:00:01.129) 0:18:07.030 **********
2026-04-05 05:32:12.778742 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.778751 | orchestrator |
2026-04-05 05:32:12.778761 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-05 05:32:12.778770 | orchestrator | Sunday 05 April 2026 05:31:31 +0000 (0:00:01.100) 0:18:08.131 **********
2026-04-05 05:32:12.778779 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.778789 | orchestrator |
2026-04-05 05:32:12.778798 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-05 05:32:12.778808 | orchestrator | Sunday 05 April 2026 05:31:32 +0000 (0:00:01.117) 0:18:09.248 **********
2026-04-05 05:32:12.778817 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-04-05 05:32:12.778832 | orchestrator |
2026-04-05 05:32:12.778842 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-05 05:32:12.778851 | orchestrator | Sunday 05 April 2026 05:31:33 +0000 (0:00:01.168) 0:18:10.417 **********
2026-04-05 05:32:12.778861 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778870 | orchestrator |
2026-04-05 05:32:12.778879 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-05 05:32:12.778888 | orchestrator | Sunday 05 April 2026 05:31:36 +0000 (0:00:02.344) 0:18:12.761 **********
2026-04-05 05:32:12.778898 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778907 | orchestrator |
2026-04-05 05:32:12.778916 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-05 05:32:12.778926 | orchestrator | Sunday 05 April 2026 05:31:37 +0000 (0:00:01.940) 0:18:14.702 **********
2026-04-05 05:32:12.778935 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.778944 | orchestrator |
2026-04-05 05:32:12.778954 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-05 05:32:12.778963 | orchestrator | Sunday 05 April 2026 05:31:40 +0000 (0:00:02.420) 0:18:17.122 **********
2026-04-05 05:32:12.778972 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:32:12.778982 | orchestrator |
2026-04-05 05:32:12.778991 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-05 05:32:12.779000 | orchestrator | Sunday 05 April 2026 05:31:43 +0000 (0:00:02.963) 0:18:20.085 **********
2026-04-05 05:32:12.779010 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-04-05 05:32:12.779019 | orchestrator |
2026-04-05 05:32:12.779029 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-05 05:32:12.779075 | orchestrator | Sunday 05 April 2026 05:31:44 +0000 (0:00:01.277) 0:18:21.362 **********
2026-04-05 05:32:12.779085 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-05 05:32:12.779095 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.779105 | orchestrator |
2026-04-05 05:32:12.779114 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-05 05:32:12.779123 | orchestrator | Sunday 05 April 2026 05:32:07 +0000 (0:00:22.966) 0:18:44.329 **********
2026-04-05 05:32:12.779161 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:12.779171 | orchestrator |
2026-04-05 05:32:12.779180 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-05 05:32:12.779190 | orchestrator | Sunday 05 April 2026 05:32:10 +0000 (0:00:02.637) 0:18:46.967 **********
2026-04-05 05:32:12.779199 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:12.779208 | orchestrator |
2026-04-05 05:32:12.779217 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-05 05:32:12.779227 | orchestrator | Sunday 05 April 2026 05:32:11 +0000 (0:00:00.781) 0:18:47.749 **********
2026-04-05 05:32:12.779246 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-05 05:32:55.473898 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-05 05:32:55.473988 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-05 05:32:55.474064 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-05 05:32:55.474075 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-05 05:32:55.474082 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9ae5f50709cf0b3309dff18ba85bfaa52d5615bf'}])
2026-04-05 05:32:55.474121 | orchestrator |
2026-04-05 05:32:55.474129 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-05 05:32:55.474136 | orchestrator | Sunday 05 April 2026 05:32:20 +0000 (0:00:09.494) 0:18:57.243 **********
2026-04-05 05:32:55.474142 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:32:55.474148 | orchestrator |
2026-04-05 05:32:55.474154 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:32:55.474160 | orchestrator | Sunday 05 April 2026 05:32:22 +0000 (0:00:02.118) 0:18:59.361 **********
2026-04-05 05:32:55.474166 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:32:55.474173 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-05 05:32:55.474178 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-05 05:32:55.474184 | orchestrator |
2026-04-05 05:32:55.474190 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:32:55.474196 | orchestrator | Sunday 05 April 2026 05:32:24 +0000 (0:00:01.901) 0:19:01.263 **********
2026-04-05 05:32:55.474201 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:32:55.474207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:32:55.474213 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:32:55.474219 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:55.474224 | orchestrator |
2026-04-05 05:32:55.474230 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-05 05:32:55.474236 | orchestrator | Sunday 05 April 2026 05:32:25 +0000 (0:00:01.133) 0:19:02.397 **********
2026-04-05 05:32:55.474241 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:32:55.474247 | orchestrator |
2026-04-05 05:32:55.474253 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-05 05:32:55.474259 | orchestrator | Sunday 05 April 2026 05:32:26 +0000 (0:00:00.765) 0:19:03.163 **********
2026-04-05 05:32:55.474264 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:55.474271 | orchestrator |
2026-04-05 05:32:55.474277 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-04-05 05:32:55.474282 | orchestrator |
2026-04-05 05:32:55.474288 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-04-05 05:32:55.474294 | orchestrator | Sunday 05 April 2026 05:32:29 +0000 (0:00:02.991) 0:19:06.155 **********
2026-04-05 05:32:55.474299 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:32:55.474305 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:32:55.474316 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:32:55.474322 | orchestrator |
2026-04-05 05:32:55.474328 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-05 05:32:55.474333 | orchestrator |
2026-04-05 05:32:55.474339 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-05 05:32:55.474345 | orchestrator | Sunday 05 April 2026 05:32:31 +0000 (0:00:01.572) 0:19:07.728 **********
2026-04-05 05:32:55.474350 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474356 | orchestrator |
2026-04-05 05:32:55.474362 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:32:55.474380 | orchestrator | Sunday 05 April 2026 05:32:32 +0000 (0:00:01.192) 0:19:08.921 **********
2026-04-05 05:32:55.474387 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474392 | orchestrator |
2026-04-05 05:32:55.474398 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:32:55.474404 | orchestrator | Sunday 05 April 2026 05:32:33 +0000 (0:00:01.150) 0:19:10.072 **********
2026-04-05 05:32:55.474410 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474415 | orchestrator |
2026-04-05 05:32:55.474421 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:32:55.474427 | orchestrator | Sunday 05 April 2026 05:32:34 +0000 (0:00:01.153) 0:19:11.225 **********
2026-04-05 05:32:55.474432 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474438 | orchestrator |
2026-04-05 05:32:55.474444 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:32:55.474449 | orchestrator | Sunday 05 April 2026 05:32:35 +0000 (0:00:01.157) 0:19:12.383 **********
2026-04-05 05:32:55.474455 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474462 | orchestrator |
2026-04-05 05:32:55.474469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:32:55.474476 | orchestrator | Sunday 05 April 2026 05:32:36 +0000 (0:00:01.168) 0:19:13.552 **********
2026-04-05 05:32:55.474483 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474489 | orchestrator |
2026-04-05 05:32:55.474496 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:32:55.474507 | orchestrator | Sunday 05 April 2026 05:32:38 +0000 (0:00:01.196) 0:19:14.748 **********
2026-04-05 05:32:55.474514 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474520 | orchestrator |
2026-04-05 05:32:55.474527 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:32:55.474534 | orchestrator | Sunday 05 April 2026 05:32:39 +0000 (0:00:01.122) 0:19:15.870 **********
2026-04-05 05:32:55.474540 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474546 | orchestrator |
2026-04-05 05:32:55.474553 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:32:55.474560 | orchestrator | Sunday 05 April 2026 05:32:40 +0000 (0:00:01.161) 0:19:17.032 **********
2026-04-05 05:32:55.474567 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474573 | orchestrator |
2026-04-05 05:32:55.474579 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:32:55.474586 | orchestrator | Sunday 05 April 2026 05:32:41 +0000 (0:00:01.163) 0:19:18.195 **********
2026-04-05 05:32:55.474592 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474599 | orchestrator |
2026-04-05 05:32:55.474605 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:32:55.474612 | orchestrator | Sunday 05 April 2026 05:32:42 +0000 (0:00:01.155) 0:19:19.351 **********
2026-04-05 05:32:55.474619 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474625 | orchestrator |
2026-04-05 05:32:55.474632 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:32:55.474638 | orchestrator | Sunday 05 April 2026 05:32:43 +0000 (0:00:01.137) 0:19:20.488 **********
2026-04-05 05:32:55.474645 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474651 | orchestrator |
2026-04-05 05:32:55.474658 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:32:55.474669 | orchestrator | Sunday 05 April 2026 05:32:44 +0000 (0:00:01.140) 0:19:21.629 **********
2026-04-05 05:32:55.474675 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474682 | orchestrator |
2026-04-05 05:32:55.474688 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:32:55.474694 | orchestrator | Sunday 05 April 2026 05:32:46 +0000 (0:00:01.148) 0:19:22.777 **********
2026-04-05 05:32:55.474701 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474708 | orchestrator |
2026-04-05 05:32:55.474715 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:32:55.474721 | orchestrator | Sunday 05 April 2026 05:32:47 +0000 (0:00:01.157) 0:19:23.935 **********
2026-04-05 05:32:55.474728 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474734 | orchestrator |
2026-04-05 05:32:55.474741 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:32:55.474747 | orchestrator | Sunday 05 April 2026 05:32:48 +0000 (0:00:01.131) 0:19:25.067 **********
2026-04-05 05:32:55.474754 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474761 | orchestrator |
2026-04-05 05:32:55.474767 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:32:55.474774 | orchestrator | Sunday 05 April 2026 05:32:49 +0000 (0:00:01.206) 0:19:26.188 **********
2026-04-05 05:32:55.474781 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474787 | orchestrator |
2026-04-05 05:32:55.474793 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:32:55.474800 | orchestrator | Sunday 05 April 2026 05:32:50 +0000 (0:00:01.116) 0:19:27.395 **********
2026-04-05 05:32:55.474807 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474813 | orchestrator |
2026-04-05 05:32:55.474820 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:32:55.474827 | orchestrator | Sunday 05 April 2026 05:32:51 +0000 (0:00:01.116) 0:19:28.511 **********
2026-04-05 05:32:55.474833 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:32:55.474840 | orchestrator |
2026-04-05 05:32:55.474847 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:32:55.474854 | orchestrator | Sunday 05 April 2026 05:32:53 +0000 (0:00:01.248) 0:19:29.760 ********** 2026-04-05 05:32:55.474860 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:32:55.474867 | orchestrator | 2026-04-05 05:32:55.474874 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 05:32:55.474881 | orchestrator | Sunday 05 April 2026 05:32:54 +0000 (0:00:01.152) 0:19:30.913 ********** 2026-04-05 05:32:55.474886 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:32:55.474892 | orchestrator | 2026-04-05 05:32:55.474898 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 05:32:55.474903 | orchestrator | Sunday 05 April 2026 05:32:55 +0000 (0:00:01.191) 0:19:32.104 ********** 2026-04-05 05:32:55.474912 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.692952 | orchestrator | 2026-04-05 05:33:40.693062 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 05:33:40.693079 | orchestrator | Sunday 05 April 2026 05:32:56 +0000 (0:00:01.149) 0:19:33.254 ********** 2026-04-05 05:33:40.693090 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693101 | orchestrator | 2026-04-05 05:33:40.693111 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 05:33:40.693121 | orchestrator | Sunday 05 April 2026 05:32:57 +0000 (0:00:01.099) 0:19:34.354 ********** 2026-04-05 05:33:40.693130 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693140 | orchestrator | 2026-04-05 05:33:40.693193 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 05:33:40.693203 | orchestrator | Sunday 05 April 2026 05:32:58 +0000 (0:00:01.125) 0:19:35.480 ********** 2026-04-05 05:33:40.693213 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693223 
| orchestrator | 2026-04-05 05:33:40.693232 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 05:33:40.693265 | orchestrator | Sunday 05 April 2026 05:32:59 +0000 (0:00:01.156) 0:19:36.636 ********** 2026-04-05 05:33:40.693276 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693285 | orchestrator | 2026-04-05 05:33:40.693295 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 05:33:40.693304 | orchestrator | Sunday 05 April 2026 05:33:01 +0000 (0:00:01.161) 0:19:37.797 ********** 2026-04-05 05:33:40.693314 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693323 | orchestrator | 2026-04-05 05:33:40.693347 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 05:33:40.693357 | orchestrator | Sunday 05 April 2026 05:33:02 +0000 (0:00:01.159) 0:19:38.957 ********** 2026-04-05 05:33:40.693367 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693376 | orchestrator | 2026-04-05 05:33:40.693386 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 05:33:40.693395 | orchestrator | Sunday 05 April 2026 05:33:03 +0000 (0:00:01.131) 0:19:40.089 ********** 2026-04-05 05:33:40.693404 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693414 | orchestrator | 2026-04-05 05:33:40.693423 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 05:33:40.693432 | orchestrator | Sunday 05 April 2026 05:33:04 +0000 (0:00:01.138) 0:19:41.228 ********** 2026-04-05 05:33:40.693442 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693451 | orchestrator | 2026-04-05 05:33:40.693461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 05:33:40.693471 | orchestrator | Sunday 05 April 2026 
05:33:05 +0000 (0:00:01.146) 0:19:42.374 ********** 2026-04-05 05:33:40.693480 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693490 | orchestrator | 2026-04-05 05:33:40.693501 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 05:33:40.693512 | orchestrator | Sunday 05 April 2026 05:33:06 +0000 (0:00:01.241) 0:19:43.615 ********** 2026-04-05 05:33:40.693523 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693534 | orchestrator | 2026-04-05 05:33:40.693545 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 05:33:40.693556 | orchestrator | Sunday 05 April 2026 05:33:08 +0000 (0:00:01.115) 0:19:44.731 ********** 2026-04-05 05:33:40.693568 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693579 | orchestrator | 2026-04-05 05:33:40.693590 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 05:33:40.693601 | orchestrator | Sunday 05 April 2026 05:33:09 +0000 (0:00:01.247) 0:19:45.978 ********** 2026-04-05 05:33:40.693612 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693623 | orchestrator | 2026-04-05 05:33:40.693634 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 05:33:40.693645 | orchestrator | Sunday 05 April 2026 05:33:10 +0000 (0:00:01.178) 0:19:47.157 ********** 2026-04-05 05:33:40.693656 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693668 | orchestrator | 2026-04-05 05:33:40.693679 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 05:33:40.693689 | orchestrator | Sunday 05 April 2026 05:33:11 +0000 (0:00:01.146) 0:19:48.303 ********** 2026-04-05 05:33:40.693699 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693710 | orchestrator | 2026-04-05 05:33:40.693721 | orchestrator | 
TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 05:33:40.693732 | orchestrator | Sunday 05 April 2026 05:33:12 +0000 (0:00:01.115) 0:19:49.419 ********** 2026-04-05 05:33:40.693743 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693753 | orchestrator | 2026-04-05 05:33:40.693765 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 05:33:40.693776 | orchestrator | Sunday 05 April 2026 05:33:13 +0000 (0:00:01.111) 0:19:50.530 ********** 2026-04-05 05:33:40.693787 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693797 | orchestrator | 2026-04-05 05:33:40.693817 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 05:33:40.693827 | orchestrator | Sunday 05 April 2026 05:33:14 +0000 (0:00:01.152) 0:19:51.683 ********** 2026-04-05 05:33:40.693836 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693846 | orchestrator | 2026-04-05 05:33:40.693855 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 05:33:40.693866 | orchestrator | Sunday 05 April 2026 05:33:16 +0000 (0:00:01.166) 0:19:52.850 ********** 2026-04-05 05:33:40.693875 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693885 | orchestrator | 2026-04-05 05:33:40.693894 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 05:33:40.693904 | orchestrator | Sunday 05 April 2026 05:33:17 +0000 (0:00:01.159) 0:19:54.010 ********** 2026-04-05 05:33:40.693913 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693922 | orchestrator | 2026-04-05 05:33:40.693932 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-05 05:33:40.693942 | orchestrator | Sunday 05 
April 2026 05:33:18 +0000 (0:00:01.184) 0:19:55.194 ********** 2026-04-05 05:33:40.693967 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.693977 | orchestrator | 2026-04-05 05:33:40.693987 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 05:33:40.693996 | orchestrator | Sunday 05 April 2026 05:33:19 +0000 (0:00:01.171) 0:19:56.366 ********** 2026-04-05 05:33:40.694006 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694072 | orchestrator | 2026-04-05 05:33:40.694085 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 05:33:40.694095 | orchestrator | Sunday 05 April 2026 05:33:21 +0000 (0:00:01.432) 0:19:57.798 ********** 2026-04-05 05:33:40.694105 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694114 | orchestrator | 2026-04-05 05:33:40.694124 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 05:33:40.694134 | orchestrator | Sunday 05 April 2026 05:33:22 +0000 (0:00:01.173) 0:19:58.972 ********** 2026-04-05 05:33:40.694143 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694175 | orchestrator | 2026-04-05 05:33:40.694185 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 05:33:40.694194 | orchestrator | Sunday 05 April 2026 05:33:23 +0000 (0:00:01.138) 0:20:00.111 ********** 2026-04-05 05:33:40.694204 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694213 | orchestrator | 2026-04-05 05:33:40.694223 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 05:33:40.694232 | orchestrator | Sunday 05 April 2026 05:33:24 +0000 (0:00:01.296) 0:20:01.407 ********** 2026-04-05 05:33:40.694248 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694257 | orchestrator | 2026-04-05 
05:33:40.694267 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 05:33:40.694277 | orchestrator | Sunday 05 April 2026 05:33:25 +0000 (0:00:01.151) 0:20:02.559 ********** 2026-04-05 05:33:40.694286 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694296 | orchestrator | 2026-04-05 05:33:40.694305 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:33:40.694315 | orchestrator | Sunday 05 April 2026 05:33:27 +0000 (0:00:01.255) 0:20:03.815 ********** 2026-04-05 05:33:40.694324 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694334 | orchestrator | 2026-04-05 05:33:40.694343 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:33:40.694353 | orchestrator | Sunday 05 April 2026 05:33:28 +0000 (0:00:01.107) 0:20:04.922 ********** 2026-04-05 05:33:40.694362 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694372 | orchestrator | 2026-04-05 05:33:40.694381 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:33:40.694392 | orchestrator | Sunday 05 April 2026 05:33:29 +0000 (0:00:01.129) 0:20:06.052 ********** 2026-04-05 05:33:40.694422 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694431 | orchestrator | 2026-04-05 05:33:40.694441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 05:33:40.694450 | orchestrator | Sunday 05 April 2026 05:33:30 +0000 (0:00:01.177) 0:20:07.229 ********** 2026-04-05 05:33:40.694460 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694469 | orchestrator | 2026-04-05 05:33:40.694478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:33:40.694488 | orchestrator | 
Sunday 05 April 2026 05:33:31 +0000 (0:00:01.144) 0:20:08.373 ********** 2026-04-05 05:33:40.694497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694507 | orchestrator | 2026-04-05 05:33:40.694516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:33:40.694526 | orchestrator | Sunday 05 April 2026 05:33:32 +0000 (0:00:01.155) 0:20:09.529 ********** 2026-04-05 05:33:40.694535 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694545 | orchestrator | 2026-04-05 05:33:40.694554 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:33:40.694563 | orchestrator | Sunday 05 April 2026 05:33:34 +0000 (0:00:01.216) 0:20:10.746 ********** 2026-04-05 05:33:40.694573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:33:40.694583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:33:40.694592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:33:40.694602 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694611 | orchestrator | 2026-04-05 05:33:40.694621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 05:33:40.694631 | orchestrator | Sunday 05 April 2026 05:33:35 +0000 (0:00:01.831) 0:20:12.577 ********** 2026-04-05 05:33:40.694640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:33:40.694650 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:33:40.694659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:33:40.694669 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694678 | orchestrator | 2026-04-05 05:33:40.694688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:33:40.694698 | 
orchestrator | Sunday 05 April 2026 05:33:37 +0000 (0:00:02.037) 0:20:14.615 ********** 2026-04-05 05:33:40.694707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:33:40.694717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:33:40.694726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:33:40.694736 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694745 | orchestrator | 2026-04-05 05:33:40.694755 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:33:40.694764 | orchestrator | Sunday 05 April 2026 05:33:39 +0000 (0:00:01.416) 0:20:16.031 ********** 2026-04-05 05:33:40.694774 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:33:40.694783 | orchestrator | 2026-04-05 05:33:40.694792 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:33:40.694802 | orchestrator | Sunday 05 April 2026 05:33:40 +0000 (0:00:01.177) 0:20:17.208 ********** 2026-04-05 05:33:40.694812 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-05 05:33:40.694829 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.426385 | orchestrator | 2026-04-05 05:34:14.426520 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 05:34:14.426538 | orchestrator | Sunday 05 April 2026 05:33:41 +0000 (0:00:01.292) 0:20:18.501 ********** 2026-04-05 05:34:14.426551 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.426564 | orchestrator | 2026-04-05 05:34:14.426575 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-05 05:34:14.426587 | orchestrator | Sunday 05 April 2026 05:33:42 +0000 (0:00:01.184) 0:20:19.685 ********** 2026-04-05 05:34:14.427341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 
05:34:14.427361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 05:34:14.427374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 05:34:14.427385 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.427396 | orchestrator | 2026-04-05 05:34:14.427407 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-05 05:34:14.427418 | orchestrator | Sunday 05 April 2026 05:33:44 +0000 (0:00:01.423) 0:20:21.109 ********** 2026-04-05 05:34:14.427429 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.427439 | orchestrator | 2026-04-05 05:34:14.427450 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-05 05:34:14.427461 | orchestrator | Sunday 05 April 2026 05:33:45 +0000 (0:00:01.162) 0:20:22.271 ********** 2026-04-05 05:34:14.427472 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.427483 | orchestrator | 2026-04-05 05:34:14.427506 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-05 05:34:14.427517 | orchestrator | Sunday 05 April 2026 05:33:46 +0000 (0:00:01.118) 0:20:23.390 ********** 2026-04-05 05:34:14.427528 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.427539 | orchestrator | 2026-04-05 05:34:14.427550 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-05 05:34:14.427560 | orchestrator | Sunday 05 April 2026 05:33:47 +0000 (0:00:01.120) 0:20:24.510 ********** 2026-04-05 05:34:14.427571 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:34:14.427581 | orchestrator | 2026-04-05 05:34:14.427592 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-05 05:34:14.427603 | orchestrator | 2026-04-05 05:34:14.427614 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-04-05 05:34:14.427625 | orchestrator | Sunday 05 April 2026 05:33:48 +0000 (0:00:00.976) 0:20:25.486 ********** 2026-04-05 05:34:14.427635 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427646 | orchestrator | 2026-04-05 05:34:14.427656 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 05:34:14.427667 | orchestrator | Sunday 05 April 2026 05:33:49 +0000 (0:00:00.867) 0:20:26.354 ********** 2026-04-05 05:34:14.427678 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427688 | orchestrator | 2026-04-05 05:34:14.427699 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 05:34:14.427710 | orchestrator | Sunday 05 April 2026 05:33:50 +0000 (0:00:00.795) 0:20:27.149 ********** 2026-04-05 05:34:14.427720 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427731 | orchestrator | 2026-04-05 05:34:14.427742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 05:34:14.427752 | orchestrator | Sunday 05 April 2026 05:33:51 +0000 (0:00:00.751) 0:20:27.900 ********** 2026-04-05 05:34:14.427763 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427773 | orchestrator | 2026-04-05 05:34:14.427784 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 05:34:14.427795 | orchestrator | Sunday 05 April 2026 05:33:51 +0000 (0:00:00.791) 0:20:28.692 ********** 2026-04-05 05:34:14.427806 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427816 | orchestrator | 2026-04-05 05:34:14.427827 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 05:34:14.427837 | orchestrator | Sunday 05 April 2026 05:33:52 +0000 (0:00:00.787) 0:20:29.479 ********** 2026-04-05 05:34:14.427848 | 
orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427859 | orchestrator | 2026-04-05 05:34:14.427869 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 05:34:14.427880 | orchestrator | Sunday 05 April 2026 05:33:53 +0000 (0:00:00.763) 0:20:30.243 ********** 2026-04-05 05:34:14.427891 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427901 | orchestrator | 2026-04-05 05:34:14.427919 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 05:34:14.427930 | orchestrator | Sunday 05 April 2026 05:33:54 +0000 (0:00:00.787) 0:20:31.031 ********** 2026-04-05 05:34:14.427941 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427951 | orchestrator | 2026-04-05 05:34:14.427963 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 05:34:14.427973 | orchestrator | Sunday 05 April 2026 05:33:55 +0000 (0:00:00.803) 0:20:31.834 ********** 2026-04-05 05:34:14.427984 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.427995 | orchestrator | 2026-04-05 05:34:14.428006 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 05:34:14.428017 | orchestrator | Sunday 05 April 2026 05:33:55 +0000 (0:00:00.785) 0:20:32.619 ********** 2026-04-05 05:34:14.428027 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428041 | orchestrator | 2026-04-05 05:34:14.428062 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 05:34:14.428082 | orchestrator | Sunday 05 April 2026 05:33:56 +0000 (0:00:00.758) 0:20:33.378 ********** 2026-04-05 05:34:14.428101 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428120 | orchestrator | 2026-04-05 05:34:14.428139 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-04-05 05:34:14.428160 | orchestrator | Sunday 05 April 2026 05:33:57 +0000 (0:00:00.835) 0:20:34.214 ********** 2026-04-05 05:34:14.428178 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428249 | orchestrator | 2026-04-05 05:34:14.428269 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 05:34:14.428281 | orchestrator | Sunday 05 April 2026 05:33:58 +0000 (0:00:00.908) 0:20:35.122 ********** 2026-04-05 05:34:14.428292 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428303 | orchestrator | 2026-04-05 05:34:14.428335 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 05:34:14.428346 | orchestrator | Sunday 05 April 2026 05:33:59 +0000 (0:00:00.787) 0:20:35.910 ********** 2026-04-05 05:34:14.428357 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428368 | orchestrator | 2026-04-05 05:34:14.428379 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 05:34:14.428389 | orchestrator | Sunday 05 April 2026 05:33:59 +0000 (0:00:00.783) 0:20:36.694 ********** 2026-04-05 05:34:14.428400 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428410 | orchestrator | 2026-04-05 05:34:14.428421 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 05:34:14.428431 | orchestrator | Sunday 05 April 2026 05:34:00 +0000 (0:00:00.846) 0:20:37.540 ********** 2026-04-05 05:34:14.428442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428452 | orchestrator | 2026-04-05 05:34:14.428463 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 05:34:14.428474 | orchestrator | Sunday 05 April 2026 05:34:01 +0000 (0:00:00.759) 0:20:38.299 ********** 2026-04-05 05:34:14.428484 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428495 
| orchestrator | 2026-04-05 05:34:14.428505 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 05:34:14.428516 | orchestrator | Sunday 05 April 2026 05:34:02 +0000 (0:00:00.770) 0:20:39.070 ********** 2026-04-05 05:34:14.428534 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428545 | orchestrator | 2026-04-05 05:34:14.428556 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 05:34:14.428566 | orchestrator | Sunday 05 April 2026 05:34:03 +0000 (0:00:00.766) 0:20:39.837 ********** 2026-04-05 05:34:14.428577 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428588 | orchestrator | 2026-04-05 05:34:14.428598 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 05:34:14.428610 | orchestrator | Sunday 05 April 2026 05:34:03 +0000 (0:00:00.831) 0:20:40.668 ********** 2026-04-05 05:34:14.428620 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428640 | orchestrator | 2026-04-05 05:34:14.428651 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 05:34:14.428661 | orchestrator | Sunday 05 April 2026 05:34:04 +0000 (0:00:00.778) 0:20:41.446 ********** 2026-04-05 05:34:14.428672 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428682 | orchestrator | 2026-04-05 05:34:14.428693 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 05:34:14.428704 | orchestrator | Sunday 05 April 2026 05:34:05 +0000 (0:00:00.800) 0:20:42.247 ********** 2026-04-05 05:34:14.428714 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428725 | orchestrator | 2026-04-05 05:34:14.428736 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 05:34:14.428746 | orchestrator | Sunday 05 
April 2026 05:34:06 +0000 (0:00:00.770) 0:20:43.017 ********** 2026-04-05 05:34:14.428757 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428767 | orchestrator | 2026-04-05 05:34:14.428778 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 05:34:14.428788 | orchestrator | Sunday 05 April 2026 05:34:07 +0000 (0:00:00.775) 0:20:43.793 ********** 2026-04-05 05:34:14.428799 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428809 | orchestrator | 2026-04-05 05:34:14.428820 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 05:34:14.428831 | orchestrator | Sunday 05 April 2026 05:34:07 +0000 (0:00:00.913) 0:20:44.706 ********** 2026-04-05 05:34:14.428841 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428852 | orchestrator | 2026-04-05 05:34:14.428862 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 05:34:14.428873 | orchestrator | Sunday 05 April 2026 05:34:08 +0000 (0:00:00.771) 0:20:45.478 ********** 2026-04-05 05:34:14.428883 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428894 | orchestrator | 2026-04-05 05:34:14.428905 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 05:34:14.428915 | orchestrator | Sunday 05 April 2026 05:34:09 +0000 (0:00:00.792) 0:20:46.270 ********** 2026-04-05 05:34:14.428926 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428936 | orchestrator | 2026-04-05 05:34:14.428947 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 05:34:14.428957 | orchestrator | Sunday 05 April 2026 05:34:10 +0000 (0:00:00.776) 0:20:47.047 ********** 2026-04-05 05:34:14.428968 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.428978 | orchestrator | 2026-04-05 05:34:14.428989 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 05:34:14.429000 | orchestrator | Sunday 05 April 2026 05:34:11 +0000 (0:00:00.805) 0:20:47.852 ********** 2026-04-05 05:34:14.429010 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.429021 | orchestrator | 2026-04-05 05:34:14.429031 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 05:34:14.429042 | orchestrator | Sunday 05 April 2026 05:34:11 +0000 (0:00:00.756) 0:20:48.608 ********** 2026-04-05 05:34:14.429053 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.429064 | orchestrator | 2026-04-05 05:34:14.429075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 05:34:14.429085 | orchestrator | Sunday 05 April 2026 05:34:12 +0000 (0:00:00.761) 0:20:49.370 ********** 2026-04-05 05:34:14.429096 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.429106 | orchestrator | 2026-04-05 05:34:14.429117 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 05:34:14.429128 | orchestrator | Sunday 05 April 2026 05:34:13 +0000 (0:00:00.790) 0:20:50.160 ********** 2026-04-05 05:34:14.429138 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.429149 | orchestrator | 2026-04-05 05:34:14.429159 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 05:34:14.429170 | orchestrator | Sunday 05 April 2026 05:34:14 +0000 (0:00:00.826) 0:20:50.987 ********** 2026-04-05 05:34:14.429180 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:34:14.429218 | orchestrator | 2026-04-05 05:34:14.429236 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 05:34:45.024477 | orchestrator | Sunday 05 April 2026 05:34:15 +0000 (0:00:00.766) 0:20:51.753 ********** 
2026-04-05 05:34:45.024619 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024644 | orchestrator |
2026-04-05 05:34:45.024662 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:34:45.024679 | orchestrator | Sunday 05 April 2026 05:34:15 +0000 (0:00:00.778) 0:20:52.532 **********
2026-04-05 05:34:45.024694 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024711 | orchestrator |
2026-04-05 05:34:45.024727 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:34:45.024743 | orchestrator | Sunday 05 April 2026 05:34:16 +0000 (0:00:00.758) 0:20:53.291 **********
2026-04-05 05:34:45.024759 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024776 | orchestrator |
2026-04-05 05:34:45.024791 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:34:45.024808 | orchestrator | Sunday 05 April 2026 05:34:17 +0000 (0:00:00.800) 0:20:54.091 **********
2026-04-05 05:34:45.024824 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024840 | orchestrator |
2026-04-05 05:34:45.024856 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:34:45.024873 | orchestrator | Sunday 05 April 2026 05:34:18 +0000 (0:00:00.873) 0:20:54.965 **********
2026-04-05 05:34:45.024889 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024903 | orchestrator |
2026-04-05 05:34:45.024939 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:34:45.024957 | orchestrator | Sunday 05 April 2026 05:34:19 +0000 (0:00:00.796) 0:20:55.761 **********
2026-04-05 05:34:45.024974 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.024991 | orchestrator |
2026-04-05 05:34:45.025008 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:34:45.025027 | orchestrator | Sunday 05 April 2026 05:34:19 +0000 (0:00:00.761) 0:20:56.523 **********
2026-04-05 05:34:45.025039 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025051 | orchestrator |
2026-04-05 05:34:45.025063 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:34:45.025075 | orchestrator | Sunday 05 April 2026 05:34:20 +0000 (0:00:00.785) 0:20:57.309 **********
2026-04-05 05:34:45.025086 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025098 | orchestrator |
2026-04-05 05:34:45.025110 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:34:45.025122 | orchestrator | Sunday 05 April 2026 05:34:21 +0000 (0:00:00.828) 0:20:58.137 **********
2026-04-05 05:34:45.025134 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025145 | orchestrator |
2026-04-05 05:34:45.025156 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:34:45.025166 | orchestrator | Sunday 05 April 2026 05:34:22 +0000 (0:00:00.764) 0:20:58.902 **********
2026-04-05 05:34:45.025175 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025185 | orchestrator |
2026-04-05 05:34:45.025195 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:34:45.025204 | orchestrator | Sunday 05 April 2026 05:34:22 +0000 (0:00:00.781) 0:20:59.683 **********
2026-04-05 05:34:45.025214 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025224 | orchestrator |
2026-04-05 05:34:45.025259 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:34:45.025268 | orchestrator | Sunday 05 April 2026 05:34:23 +0000 (0:00:00.818) 0:21:00.502 **********
2026-04-05 05:34:45.025278 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025288 | orchestrator |
2026-04-05 05:34:45.025297 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:34:45.025307 | orchestrator | Sunday 05 April 2026 05:34:24 +0000 (0:00:00.780) 0:21:01.282 **********
2026-04-05 05:34:45.025341 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025351 | orchestrator |
2026-04-05 05:34:45.025361 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:34:45.025370 | orchestrator | Sunday 05 April 2026 05:34:25 +0000 (0:00:00.898) 0:21:02.181 **********
2026-04-05 05:34:45.025380 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025389 | orchestrator |
2026-04-05 05:34:45.025399 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:34:45.025409 | orchestrator | Sunday 05 April 2026 05:34:26 +0000 (0:00:00.804) 0:21:02.985 **********
2026-04-05 05:34:45.025418 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025428 | orchestrator |
2026-04-05 05:34:45.025437 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 05:34:45.025447 | orchestrator | Sunday 05 April 2026 05:34:27 +0000 (0:00:00.903) 0:21:03.889 **********
2026-04-05 05:34:45.025456 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025466 | orchestrator |
2026-04-05 05:34:45.025475 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 05:34:45.025485 | orchestrator | Sunday 05 April 2026 05:34:27 +0000 (0:00:00.749) 0:21:04.638 **********
2026-04-05 05:34:45.025495 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025504 | orchestrator |
2026-04-05 05:34:45.025515 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:34:45.025526 | orchestrator | Sunday 05 April 2026 05:34:28 +0000 (0:00:00.848) 0:21:05.486 **********
2026-04-05 05:34:45.025536 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025545 | orchestrator |
2026-04-05 05:34:45.025555 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:34:45.025565 | orchestrator | Sunday 05 April 2026 05:34:29 +0000 (0:00:00.802) 0:21:06.288 **********
2026-04-05 05:34:45.025574 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025584 | orchestrator |
2026-04-05 05:34:45.025593 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:34:45.025603 | orchestrator | Sunday 05 April 2026 05:34:30 +0000 (0:00:00.788) 0:21:07.077 **********
2026-04-05 05:34:45.025613 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025623 | orchestrator |
2026-04-05 05:34:45.025653 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:34:45.025664 | orchestrator | Sunday 05 April 2026 05:34:31 +0000 (0:00:00.768) 0:21:07.846 **********
2026-04-05 05:34:45.025673 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025683 | orchestrator |
2026-04-05 05:34:45.025693 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:34:45.025702 | orchestrator | Sunday 05 April 2026 05:34:31 +0000 (0:00:00.778) 0:21:08.624 **********
2026-04-05 05:34:45.025712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:34:45.025722 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:34:45.025731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:34:45.025741 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025750 | orchestrator |
2026-04-05 05:34:45.025760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:34:45.025769 | orchestrator | Sunday 05 April 2026 05:34:33 +0000 (0:00:01.154) 0:21:09.779 **********
2026-04-05 05:34:45.025779 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:34:45.025788 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:34:45.025798 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:34:45.025814 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025823 | orchestrator |
2026-04-05 05:34:45.025833 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:34:45.025842 | orchestrator | Sunday 05 April 2026 05:34:34 +0000 (0:00:01.067) 0:21:10.846 **********
2026-04-05 05:34:45.025863 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:34:45.025873 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:34:45.025883 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:34:45.025892 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025902 | orchestrator |
2026-04-05 05:34:45.025911 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:34:45.025921 | orchestrator | Sunday 05 April 2026 05:34:35 +0000 (0:00:01.043) 0:21:11.889 **********
2026-04-05 05:34:45.025930 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025940 | orchestrator |
2026-04-05 05:34:45.025949 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:34:45.025959 | orchestrator | Sunday 05 April 2026 05:34:35 +0000 (0:00:00.779) 0:21:12.669 **********
2026-04-05 05:34:45.025969 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-05 05:34:45.025979 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.025988 | orchestrator |
2026-04-05 05:34:45.025998 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:34:45.026007 | orchestrator | Sunday 05 April 2026 05:34:36 +0000 (0:00:00.939) 0:21:13.608 **********
2026-04-05 05:34:45.026076 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026087 | orchestrator |
2026-04-05 05:34:45.026096 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 05:34:45.026106 | orchestrator | Sunday 05 April 2026 05:34:37 +0000 (0:00:00.764) 0:21:14.373 **********
2026-04-05 05:34:45.026116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:34:45.026160 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:34:45.026172 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:34:45.026181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026191 | orchestrator |
2026-04-05 05:34:45.026201 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 05:34:45.026210 | orchestrator | Sunday 05 April 2026 05:34:39 +0000 (0:00:01.583) 0:21:15.957 **********
2026-04-05 05:34:45.026220 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026246 | orchestrator |
2026-04-05 05:34:45.026256 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 05:34:45.026265 | orchestrator | Sunday 05 April 2026 05:34:40 +0000 (0:00:00.793) 0:21:16.751 **********
2026-04-05 05:34:45.026275 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026285 | orchestrator |
2026-04-05 05:34:45.026295 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 05:34:45.026304 | orchestrator | Sunday 05 April 2026 05:34:40 +0000 (0:00:00.759) 0:21:17.510 **********
2026-04-05 05:34:45.026314 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026324 | orchestrator |
2026-04-05 05:34:45.026333 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 05:34:45.026343 | orchestrator | Sunday 05 April 2026 05:34:41 +0000 (0:00:00.781) 0:21:18.292 **********
2026-04-05 05:34:45.026353 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:34:45.026362 | orchestrator |
2026-04-05 05:34:45.026372 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-05 05:34:45.026381 | orchestrator |
2026-04-05 05:34:45.026391 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-05 05:34:45.026401 | orchestrator | Sunday 05 April 2026 05:34:42 +0000 (0:00:00.964) 0:21:19.257 **********
2026-04-05 05:34:45.026411 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:34:45.026420 | orchestrator |
2026-04-05 05:34:45.026430 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:34:45.026440 | orchestrator | Sunday 05 April 2026 05:34:43 +0000 (0:00:00.779) 0:21:20.037 **********
2026-04-05 05:34:45.026450 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:34:45.026467 | orchestrator |
2026-04-05 05:34:45.026477 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:34:45.026487 | orchestrator | Sunday 05 April 2026 05:34:44 +0000 (0:00:00.774) 0:21:20.812 **********
2026-04-05 05:34:45.026497 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:34:45.026506 | orchestrator |
2026-04-05 05:34:45.026516 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:34:45.026527 | orchestrator | Sunday 05 April 2026 05:34:44 +0000 (0:00:00.861) 0:21:21.673 **********
2026-04-05 05:34:45.026545 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878307 | orchestrator |
2026-04-05 05:35:16.878427 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:35:16.878446 | orchestrator | Sunday 05 April 2026 05:34:45 +0000 (0:00:00.770) 0:21:22.444 **********
2026-04-05 05:35:16.878458 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878471 | orchestrator |
2026-04-05 05:35:16.878483 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:35:16.878494 | orchestrator | Sunday 05 April 2026 05:34:46 +0000 (0:00:00.760) 0:21:23.204 **********
2026-04-05 05:35:16.878505 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878516 | orchestrator |
2026-04-05 05:35:16.878527 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:35:16.878538 | orchestrator | Sunday 05 April 2026 05:34:47 +0000 (0:00:00.831) 0:21:24.036 **********
2026-04-05 05:35:16.878549 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878560 | orchestrator |
2026-04-05 05:35:16.878570 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:35:16.878581 | orchestrator | Sunday 05 April 2026 05:34:48 +0000 (0:00:00.761) 0:21:24.797 **********
2026-04-05 05:35:16.878592 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878603 | orchestrator |
2026-04-05 05:35:16.878613 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:35:16.878645 | orchestrator | Sunday 05 April 2026 05:34:48 +0000 (0:00:00.798) 0:21:25.596 **********
2026-04-05 05:35:16.878659 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878671 | orchestrator |
2026-04-05 05:35:16.878684 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:35:16.878697 | orchestrator | Sunday 05 April 2026 05:34:49 +0000 (0:00:00.771) 0:21:26.368 **********
2026-04-05 05:35:16.878709 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878722 | orchestrator |
2026-04-05 05:35:16.878734 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:35:16.878746 | orchestrator | Sunday 05 April 2026 05:34:50 +0000 (0:00:00.847) 0:21:27.215 **********
2026-04-05 05:35:16.878759 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878773 | orchestrator |
2026-04-05 05:35:16.878785 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:35:16.878797 | orchestrator | Sunday 05 April 2026 05:34:51 +0000 (0:00:00.789) 0:21:28.005 **********
2026-04-05 05:35:16.878810 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878823 | orchestrator |
2026-04-05 05:35:16.878835 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:35:16.878848 | orchestrator | Sunday 05 April 2026 05:34:52 +0000 (0:00:00.765) 0:21:28.771 **********
2026-04-05 05:35:16.878860 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878873 | orchestrator |
2026-04-05 05:35:16.878886 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:35:16.878898 | orchestrator | Sunday 05 April 2026 05:34:52 +0000 (0:00:00.779) 0:21:29.550 **********
2026-04-05 05:35:16.878911 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878924 | orchestrator |
2026-04-05 05:35:16.878936 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:35:16.878948 | orchestrator | Sunday 05 April 2026 05:34:53 +0000 (0:00:00.814) 0:21:30.365 **********
2026-04-05 05:35:16.878961 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.878997 | orchestrator |
2026-04-05 05:35:16.879012 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:35:16.879025 | orchestrator | Sunday 05 April 2026 05:34:54 +0000 (0:00:00.778) 0:21:31.144 **********
2026-04-05 05:35:16.879036 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879047 | orchestrator |
2026-04-05 05:35:16.879058 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:35:16.879069 | orchestrator | Sunday 05 April 2026 05:34:55 +0000 (0:00:00.823) 0:21:31.968 **********
2026-04-05 05:35:16.879080 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879091 | orchestrator |
2026-04-05 05:35:16.879102 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:35:16.879112 | orchestrator | Sunday 05 April 2026 05:34:56 +0000 (0:00:00.792) 0:21:32.760 **********
2026-04-05 05:35:16.879123 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879134 | orchestrator |
2026-04-05 05:35:16.879145 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:35:16.879155 | orchestrator | Sunday 05 April 2026 05:34:56 +0000 (0:00:00.785) 0:21:33.545 **********
2026-04-05 05:35:16.879166 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879177 | orchestrator |
2026-04-05 05:35:16.879188 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:35:16.879200 | orchestrator | Sunday 05 April 2026 05:34:57 +0000 (0:00:00.866) 0:21:34.411 **********
2026-04-05 05:35:16.879211 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879221 | orchestrator |
2026-04-05 05:35:16.879232 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:35:16.879243 | orchestrator | Sunday 05 April 2026 05:34:58 +0000 (0:00:00.801) 0:21:35.213 **********
2026-04-05 05:35:16.879254 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879319 | orchestrator |
2026-04-05 05:35:16.879332 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:35:16.879343 | orchestrator | Sunday 05 April 2026 05:34:59 +0000 (0:00:00.806) 0:21:36.020 **********
2026-04-05 05:35:16.879354 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879365 | orchestrator |
2026-04-05 05:35:16.879375 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:35:16.879386 | orchestrator | Sunday 05 April 2026 05:35:00 +0000 (0:00:00.781) 0:21:36.802 **********
2026-04-05 05:35:16.879397 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879408 | orchestrator |
2026-04-05 05:35:16.879419 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:35:16.879430 | orchestrator | Sunday 05 April 2026 05:35:00 +0000 (0:00:00.779) 0:21:37.581 **********
2026-04-05 05:35:16.879441 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879452 | orchestrator |
2026-04-05 05:35:16.879482 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:35:16.879493 | orchestrator | Sunday 05 April 2026 05:35:01 +0000 (0:00:00.772) 0:21:38.354 **********
2026-04-05 05:35:16.879504 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879515 | orchestrator |
2026-04-05 05:35:16.879526 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:35:16.879537 | orchestrator | Sunday 05 April 2026 05:35:02 +0000 (0:00:00.794) 0:21:39.149 **********
2026-04-05 05:35:16.879548 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879558 | orchestrator |
2026-04-05 05:35:16.879569 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:35:16.879580 | orchestrator | Sunday 05 April 2026 05:35:03 +0000 (0:00:00.787) 0:21:39.937 **********
2026-04-05 05:35:16.879590 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879601 | orchestrator |
2026-04-05 05:35:16.879612 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:35:16.879622 | orchestrator | Sunday 05 April 2026 05:35:03 +0000 (0:00:00.759) 0:21:40.697 **********
2026-04-05 05:35:16.879642 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879653 | orchestrator |
2026-04-05 05:35:16.879664 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:35:16.879675 | orchestrator | Sunday 05 April 2026 05:35:04 +0000 (0:00:00.776) 0:21:41.474 **********
2026-04-05 05:35:16.879686 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879696 | orchestrator |
2026-04-05 05:35:16.879714 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:35:16.879725 | orchestrator | Sunday 05 April 2026 05:35:05 +0000 (0:00:00.800) 0:21:42.274 **********
2026-04-05 05:35:16.879736 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879747 | orchestrator |
2026-04-05 05:35:16.879757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:35:16.879768 | orchestrator | Sunday 05 April 2026 05:35:06 +0000 (0:00:00.766) 0:21:43.041 **********
2026-04-05 05:35:16.879779 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879790 | orchestrator |
2026-04-05 05:35:16.879800 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:35:16.879811 | orchestrator | Sunday 05 April 2026 05:35:07 +0000 (0:00:00.769) 0:21:43.811 **********
2026-04-05 05:35:16.879822 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879833 | orchestrator |
2026-04-05 05:35:16.879843 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:35:16.879854 | orchestrator | Sunday 05 April 2026 05:35:07 +0000 (0:00:00.758) 0:21:44.569 **********
2026-04-05 05:35:16.879865 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879876 | orchestrator |
2026-04-05 05:35:16.879886 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:35:16.879897 | orchestrator | Sunday 05 April 2026 05:35:08 +0000 (0:00:00.759) 0:21:45.329 **********
2026-04-05 05:35:16.879908 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879918 | orchestrator |
2026-04-05 05:35:16.879929 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:35:16.879940 | orchestrator | Sunday 05 April 2026 05:35:09 +0000 (0:00:00.813) 0:21:46.142 **********
2026-04-05 05:35:16.879950 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.879961 | orchestrator |
2026-04-05 05:35:16.879972 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:35:16.879983 | orchestrator | Sunday 05 April 2026 05:35:10 +0000 (0:00:00.787) 0:21:46.929 **********
2026-04-05 05:35:16.879993 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880004 | orchestrator |
2026-04-05 05:35:16.880015 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:35:16.880026 | orchestrator | Sunday 05 April 2026 05:35:11 +0000 (0:00:00.810) 0:21:47.739 **********
2026-04-05 05:35:16.880037 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880047 | orchestrator |
2026-04-05 05:35:16.880058 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:35:16.880069 | orchestrator | Sunday 05 April 2026 05:35:11 +0000 (0:00:00.768) 0:21:48.508 **********
2026-04-05 05:35:16.880080 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880090 | orchestrator |
2026-04-05 05:35:16.880101 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:35:16.880112 | orchestrator | Sunday 05 April 2026 05:35:12 +0000 (0:00:00.760) 0:21:49.268 **********
2026-04-05 05:35:16.880122 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880133 | orchestrator |
2026-04-05 05:35:16.880144 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:35:16.880156 | orchestrator | Sunday 05 April 2026 05:35:13 +0000 (0:00:00.836) 0:21:50.105 **********
2026-04-05 05:35:16.880167 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880177 | orchestrator |
2026-04-05 05:35:16.880188 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:35:16.880199 | orchestrator | Sunday 05 April 2026 05:35:14 +0000 (0:00:00.790) 0:21:50.895 **********
2026-04-05 05:35:16.880216 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880227 | orchestrator |
2026-04-05 05:35:16.880238 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:35:16.880249 | orchestrator | Sunday 05 April 2026 05:35:15 +0000 (0:00:00.863) 0:21:51.759 **********
2026-04-05 05:35:16.880260 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880289 | orchestrator |
2026-04-05 05:35:16.880300 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:35:16.880311 | orchestrator | Sunday 05 April 2026 05:35:15 +0000 (0:00:00.818) 0:21:52.577 **********
2026-04-05 05:35:16.880321 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880332 | orchestrator |
2026-04-05 05:35:16.880343 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:35:16.880354 | orchestrator | Sunday 05 April 2026 05:35:16 +0000 (0:00:00.851) 0:21:53.429 **********
2026-04-05 05:35:16.880365 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:16.880375 | orchestrator |
2026-04-05 05:35:16.880393 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:35:58.940511 | orchestrator | Sunday 05 April 2026 05:35:17 +0000 (0:00:00.785) 0:21:54.214 **********
2026-04-05 05:35:58.940627 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940643 | orchestrator |
2026-04-05 05:35:58.940655 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:35:58.940667 | orchestrator | Sunday 05 April 2026 05:35:18 +0000 (0:00:00.796) 0:21:55.011 **********
2026-04-05 05:35:58.940678 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940689 | orchestrator |
2026-04-05 05:35:58.940700 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:35:58.940710 | orchestrator | Sunday 05 April 2026 05:35:19 +0000 (0:00:00.861) 0:21:55.873 **********
2026-04-05 05:35:58.940721 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940732 | orchestrator |
2026-04-05 05:35:58.940742 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:35:58.940753 | orchestrator | Sunday 05 April 2026 05:35:19 +0000 (0:00:00.795) 0:21:56.668 **********
2026-04-05 05:35:58.940764 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940775 | orchestrator |
2026-04-05 05:35:58.940786 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 05:35:58.940797 | orchestrator | Sunday 05 April 2026 05:35:20 +0000 (0:00:00.890) 0:21:57.559 **********
2026-04-05 05:35:58.940824 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940836 | orchestrator |
2026-04-05 05:35:58.940847 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 05:35:58.940858 | orchestrator | Sunday 05 April 2026 05:35:21 +0000 (0:00:00.780) 0:21:58.339 **********
2026-04-05 05:35:58.940869 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940880 | orchestrator |
2026-04-05 05:35:58.940891 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:35:58.940903 | orchestrator | Sunday 05 April 2026 05:35:22 +0000 (0:00:00.753) 0:21:59.093 **********
2026-04-05 05:35:58.940914 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940925 | orchestrator |
2026-04-05 05:35:58.940936 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:35:58.940947 | orchestrator | Sunday 05 April 2026 05:35:23 +0000 (0:00:00.791) 0:21:59.884 **********
2026-04-05 05:35:58.940958 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.940968 | orchestrator |
2026-04-05 05:35:58.940979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:35:58.940990 | orchestrator | Sunday 05 April 2026 05:35:23 +0000 (0:00:00.774) 0:22:00.659 **********
2026-04-05 05:35:58.941001 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941012 | orchestrator |
2026-04-05 05:35:58.941044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:35:58.941057 | orchestrator | Sunday 05 April 2026 05:35:24 +0000 (0:00:00.789) 0:22:01.449 **********
2026-04-05 05:35:58.941069 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941081 | orchestrator |
2026-04-05 05:35:58.941094 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:35:58.941107 | orchestrator | Sunday 05 April 2026 05:35:25 +0000 (0:00:00.772) 0:22:02.221 **********
2026-04-05 05:35:58.941120 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:35:58.941133 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:35:58.941143 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:35:58.941154 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941165 | orchestrator |
2026-04-05 05:35:58.941175 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:35:58.941186 | orchestrator | Sunday 05 April 2026 05:35:26 +0000 (0:00:01.428) 0:22:03.650 **********
2026-04-05 05:35:58.941197 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:35:58.941208 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:35:58.941219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:35:58.941230 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941241 | orchestrator |
2026-04-05 05:35:58.941251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:35:58.941262 | orchestrator | Sunday 05 April 2026 05:35:28 +0000 (0:00:01.427) 0:22:05.077 **********
2026-04-05 05:35:58.941273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:35:58.941284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:35:58.941295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:35:58.941306 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941337 | orchestrator |
2026-04-05 05:35:58.941349 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:35:58.941360 | orchestrator | Sunday 05 April 2026 05:35:29 +0000 (0:00:01.560) 0:22:06.638 **********
2026-04-05 05:35:58.941371 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941381 | orchestrator |
2026-04-05 05:35:58.941392 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:35:58.941403 | orchestrator | Sunday 05 April 2026 05:35:30 +0000 (0:00:00.809) 0:22:07.448 **********
2026-04-05 05:35:58.941414 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-05 05:35:58.941425 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941435 | orchestrator |
2026-04-05 05:35:58.941446 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:35:58.941457 | orchestrator | Sunday 05 April 2026 05:35:31 +0000 (0:00:00.962) 0:22:08.410 **********
2026-04-05 05:35:58.941468 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941479 | orchestrator |
2026-04-05 05:35:58.941489 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 05:35:58.941500 | orchestrator | Sunday 05 April 2026 05:35:32 +0000 (0:00:00.782) 0:22:09.192 **********
2026-04-05 05:35:58.941511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:35:58.941539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:35:58.941551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:35:58.941561 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941572 | orchestrator |
2026-04-05 05:35:58.941583 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 05:35:58.941594 | orchestrator | Sunday 05 April 2026 05:35:33 +0000 (0:00:01.094) 0:22:10.286 **********
2026-04-05 05:35:58.941604 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941615 | orchestrator |
2026-04-05 05:35:58.941626 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 05:35:58.941644 | orchestrator | Sunday 05 April 2026 05:35:34 +0000 (0:00:00.820) 0:22:11.106 **********
2026-04-05 05:35:58.941655 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941666 | orchestrator |
2026-04-05 05:35:58.941677 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 05:35:58.941688 | orchestrator | Sunday 05 April 2026 05:35:35 +0000 (0:00:00.811) 0:22:11.918 **********
2026-04-05 05:35:58.941698 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941709 | orchestrator |
2026-04-05 05:35:58.941720 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 05:35:58.941731 | orchestrator | Sunday 05 April 2026 05:35:35 +0000 (0:00:00.776) 0:22:12.694 **********
2026-04-05 05:35:58.941742 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:35:58.941752 | orchestrator |
2026-04-05 05:35:58.941769 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-05 05:35:58.941780 | orchestrator |
2026-04-05 05:35:58.941791 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-05 05:35:58.941801 | orchestrator | Sunday 05 April 2026 05:35:37 +0000 (0:00:01.346) 0:22:14.041 **********
2026-04-05 05:35:58.941812 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:35:58.941823 | orchestrator |
2026-04-05 05:35:58.941833 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-05 05:35:58.941844 | orchestrator | Sunday 05 April 2026 05:35:40 +0000 (0:00:02.944) 0:22:16.986 **********
2026-04-05 05:35:58.941855 | orchestrator | changed: [testbed-node-0]
2026-04-05 05:35:58.941866 | orchestrator |
2026-04-05 05:35:58.941876 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:35:58.941887 | orchestrator | Sunday 05 April 2026 05:35:42 +0000 (0:00:02.459) 0:22:19.445 **********
2026-04-05 05:35:58.941898 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-05 05:35:58.941909 | orchestrator |
2026-04-05 05:35:58.941919 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:35:58.941930 | orchestrator | Sunday 05 April 2026 05:35:43 +0000 (0:00:01.131) 0:22:20.577 **********
2026-04-05 05:35:58.941941 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.941952 | orchestrator |
2026-04-05 05:35:58.941963 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:35:58.941974 | orchestrator | Sunday 05 April 2026 05:35:45 +0000 (0:00:01.501) 0:22:22.078 **********
2026-04-05 05:35:58.941984 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.941995 | orchestrator |
2026-04-05 05:35:58.942006 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:35:58.942074 | orchestrator | Sunday 05 April 2026 05:35:46 +0000 (0:00:01.141) 0:22:23.219 **********
2026-04-05 05:35:58.942087 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.942098 | orchestrator |
2026-04-05 05:35:58.942109 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:35:58.942120 | orchestrator | Sunday 05 April 2026 05:35:48 +0000 (0:00:01.499) 0:22:24.719 **********
2026-04-05 05:35:58.942131 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.942142 | orchestrator |
2026-04-05 05:35:58.942153 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:35:58.942164 | orchestrator | Sunday 05 April 2026 05:35:49 +0000 (0:00:01.138) 0:22:25.858 **********
2026-04-05 05:35:58.942174 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.942185 | orchestrator |
2026-04-05 05:35:58.942196 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:35:58.942207 | orchestrator | Sunday 05 April 2026 05:35:50 +0000 (0:00:01.168) 0:22:27.026 **********
2026-04-05 05:35:58.942218 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.942229 | orchestrator |
2026-04-05 05:35:58.942240 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:35:58.942252 | orchestrator | Sunday 05 April 2026 05:35:51 +0000 (0:00:01.220) 0:22:28.246 **********
2026-04-05 05:35:58.942270 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:35:58.942281 | orchestrator |
2026-04-05 05:35:58.942292 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:35:58.942303 | orchestrator | Sunday 05 April 2026 05:35:52 +0000 (0:00:01.160) 0:22:29.407 **********
2026-04-05 05:35:58.942314 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:35:58.942342 | orchestrator |
2026-04-05 05:35:58.942352 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:35:58.942363 | orchestrator | Sunday 05 April 2026 05:35:53 +0000 (0:00:01.103) 0:22:30.510 **********
2026-04-05 05:35:58.942374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:35:58.942385 | orchestrator | ok: [testbed-node-0 ->
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:35:58.942396 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:35:58.942407 | orchestrator | 2026-04-05 05:35:58.942418 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 05:35:58.942429 | orchestrator | Sunday 05 April 2026 05:35:55 +0000 (0:00:02.103) 0:22:32.613 ********** 2026-04-05 05:35:58.942440 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:35:58.942451 | orchestrator | 2026-04-05 05:35:58.942461 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 05:35:58.942472 | orchestrator | Sunday 05 April 2026 05:35:57 +0000 (0:00:01.755) 0:22:34.369 ********** 2026-04-05 05:35:58.942483 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:35:58.942501 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:36:22.727304 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:36:22.727531 | orchestrator | 2026-04-05 05:36:22.727562 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 05:36:22.727584 | orchestrator | Sunday 05 April 2026 05:36:00 +0000 (0:00:03.040) 0:22:37.409 ********** 2026-04-05 05:36:22.727605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 05:36:22.727624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 05:36:22.727640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 05:36:22.727652 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.727663 | orchestrator | 2026-04-05 05:36:22.727675 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 05:36:22.727686 | 
orchestrator | Sunday 05 April 2026 05:36:02 +0000 (0:00:01.448) 0:22:38.858 ********** 2026-04-05 05:36:22.727698 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727743 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727754 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.727765 | orchestrator | 2026-04-05 05:36:22.727776 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:36:22.727787 | orchestrator | Sunday 05 April 2026 05:36:03 +0000 (0:00:01.677) 0:22:40.535 ********** 2026-04-05 05:36:22.727800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727840 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:22.727868 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.727884 | orchestrator | 2026-04-05 05:36:22.727908 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:36:22.727935 | orchestrator | Sunday 05 April 2026 05:36:05 +0000 (0:00:01.202) 0:22:41.738 ********** 2026-04-05 05:36:22.728096 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:35:58.217580', 'end': '2026-04-05 05:35:58.269394', 'delta': '0:00:00.051814', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:36:22.728145 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:35:58.816922', 'end': '2026-04-05 05:35:58.876486', 'delta': '0:00:00.059564', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:36:22.728167 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:35:59.405845', 'end': '2026-04-05 05:35:59.458112', 'delta': '0:00:00.052267', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:36:22.728179 | orchestrator | 2026-04-05 05:36:22.728190 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:36:22.728201 | orchestrator | Sunday 05 April 2026 05:36:06 +0000 (0:00:01.208) 0:22:42.946 ********** 2026-04-05 05:36:22.728212 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:36:22.728236 | orchestrator | 2026-04-05 05:36:22.728247 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:36:22.728258 | orchestrator | Sunday 05 April 2026 05:36:07 +0000 
(0:00:01.275) 0:22:44.222 ********** 2026-04-05 05:36:22.728268 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728279 | orchestrator | 2026-04-05 05:36:22.728289 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:36:22.728300 | orchestrator | Sunday 05 April 2026 05:36:08 +0000 (0:00:01.250) 0:22:45.472 ********** 2026-04-05 05:36:22.728311 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:36:22.728321 | orchestrator | 2026-04-05 05:36:22.728332 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 05:36:22.728343 | orchestrator | Sunday 05 April 2026 05:36:09 +0000 (0:00:01.156) 0:22:46.629 ********** 2026-04-05 05:36:22.728380 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:36:22.728390 | orchestrator | 2026-04-05 05:36:22.728401 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:36:22.728411 | orchestrator | Sunday 05 April 2026 05:36:12 +0000 (0:00:02.117) 0:22:48.747 ********** 2026-04-05 05:36:22.728422 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:36:22.728432 | orchestrator | 2026-04-05 05:36:22.728443 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 05:36:22.728454 | orchestrator | Sunday 05 April 2026 05:36:13 +0000 (0:00:01.162) 0:22:49.910 ********** 2026-04-05 05:36:22.728464 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728475 | orchestrator | 2026-04-05 05:36:22.728485 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 05:36:22.728496 | orchestrator | Sunday 05 April 2026 05:36:14 +0000 (0:00:01.134) 0:22:51.044 ********** 2026-04-05 05:36:22.728507 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728517 | orchestrator | 2026-04-05 05:36:22.728528 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-04-05 05:36:22.728539 | orchestrator | Sunday 05 April 2026 05:36:15 +0000 (0:00:01.293) 0:22:52.338 ********** 2026-04-05 05:36:22.728549 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728560 | orchestrator | 2026-04-05 05:36:22.728571 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 05:36:22.728581 | orchestrator | Sunday 05 April 2026 05:36:16 +0000 (0:00:01.302) 0:22:53.640 ********** 2026-04-05 05:36:22.728592 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728603 | orchestrator | 2026-04-05 05:36:22.728614 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 05:36:22.728624 | orchestrator | Sunday 05 April 2026 05:36:18 +0000 (0:00:01.152) 0:22:54.793 ********** 2026-04-05 05:36:22.728635 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728645 | orchestrator | 2026-04-05 05:36:22.728656 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 05:36:22.728667 | orchestrator | Sunday 05 April 2026 05:36:19 +0000 (0:00:01.108) 0:22:55.901 ********** 2026-04-05 05:36:22.728677 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728688 | orchestrator | 2026-04-05 05:36:22.728699 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 05:36:22.728709 | orchestrator | Sunday 05 April 2026 05:36:20 +0000 (0:00:01.133) 0:22:57.035 ********** 2026-04-05 05:36:22.728720 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728730 | orchestrator | 2026-04-05 05:36:22.728741 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 05:36:22.728751 | orchestrator | Sunday 05 April 2026 05:36:21 +0000 (0:00:01.118) 0:22:58.153 ********** 2026-04-05 05:36:22.728762 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 05:36:22.728772 | orchestrator | 2026-04-05 05:36:22.728783 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 05:36:22.728794 | orchestrator | Sunday 05 April 2026 05:36:22 +0000 (0:00:01.116) 0:22:59.270 ********** 2026-04-05 05:36:22.728805 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:22.728822 | orchestrator | 2026-04-05 05:36:22.728840 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 05:36:25.027640 | orchestrator | Sunday 05 April 2026 05:36:23 +0000 (0:00:01.109) 0:23:00.379 ********** 2026-04-05 05:36:25.027763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:36:25.027877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.027985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:36:25.028034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.028048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:36:25.028060 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:36:25.028073 | orchestrator | 2026-04-05 05:36:25.028085 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:36:25.028097 | orchestrator | Sunday 05 April 2026 05:36:24 +0000 (0:00:01.262) 0:23:01.641 ********** 2026-04-05 05:36:25.028109 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:25.028122 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:25.028149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408699 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408742 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408791 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:36:36.408835 | orchestrator | skipping: [testbed-node-0] 2026-04-05 
05:36:36.408857 | orchestrator |
2026-04-05 05:36:36.408876 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 05:36:36.408898 | orchestrator | Sunday 05 April 2026 05:36:26 +0000 (0:00:01.263) 0:23:02.905 **********
2026-04-05 05:36:36.408916 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:36:36.408936 | orchestrator |
2026-04-05 05:36:36.408955 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 05:36:36.408973 | orchestrator | Sunday 05 April 2026 05:36:27 +0000 (0:00:01.562) 0:23:04.467 **********
2026-04-05 05:36:36.408990 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:36:36.409008 | orchestrator |
2026-04-05 05:36:36.409025 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:36:36.409043 | orchestrator | Sunday 05 April 2026 05:36:28 +0000 (0:00:01.140) 0:23:05.608 **********
2026-04-05 05:36:36.409060 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:36:36.409079 | orchestrator |
2026-04-05 05:36:36.409098 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:36:36.409130 | orchestrator | Sunday 05 April 2026 05:36:30 +0000 (0:00:01.544) 0:23:07.153 **********
2026-04-05 05:36:36.409150 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:36:36.409168 | orchestrator |
2026-04-05 05:36:36.409186 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:36:36.409205 | orchestrator | Sunday 05 April 2026 05:36:31 +0000 (0:00:01.106) 0:23:08.259 **********
2026-04-05 05:36:36.409224 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:36:36.409244 | orchestrator |
2026-04-05 05:36:36.409263 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:36:36.409283 | orchestrator | Sunday 05 April 2026 05:36:33 +0000 (0:00:01.783) 0:23:10.042 **********
2026-04-05 05:36:36.409301 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:36:36.409319 | orchestrator |
2026-04-05 05:36:36.409336 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:36:36.409347 | orchestrator | Sunday 05 April 2026 05:36:34 +0000 (0:00:01.195) 0:23:11.238 **********
2026-04-05 05:36:36.409357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:36:36.409409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:36:36.409421 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:36:36.409432 | orchestrator |
2026-04-05 05:36:36.409443 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:36:36.409454 | orchestrator | Sunday 05 April 2026 05:36:36 +0000 (0:00:01.687) 0:23:12.926 **********
2026-04-05 05:36:36.409465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:36:36.409476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 05:36:36.409487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 05:36:36.409498 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:36:36.409509 | orchestrator |
2026-04-05 05:36:36.409532 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 05:37:20.708035 | orchestrator | Sunday 05 April 2026 05:36:37 +0000 (0:00:01.191) 0:23:14.118 **********
2026-04-05 05:37:20.708208 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708227 | orchestrator |
2026-04-05 05:37:20.708240 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:37:20.708252 | orchestrator | Sunday 05 April 2026 05:36:38 +0000 (0:00:01.143) 0:23:15.261 **********
2026-04-05 05:37:20.708263 |
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:37:20.708274 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:37:20.708286 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:37:20.708297 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:37:20.708307 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:37:20.708318 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:37:20.708344 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:37:20.708355 | orchestrator |
2026-04-05 05:37:20.708366 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:37:20.708377 | orchestrator | Sunday 05 April 2026 05:36:40 +0000 (0:00:01.869) 0:23:17.131 **********
2026-04-05 05:37:20.708388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 05:37:20.708398 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:37:20.708409 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:37:20.708420 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:37:20.708430 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:37:20.708464 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:37:20.708476 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:37:20.708486 | orchestrator |
2026-04-05 05:37:20.708497 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:37:20.708508 | orchestrator | Sunday 05 April 2026 05:36:43 +0000 (0:00:02.637) 0:23:19.769 **********
2026-04-05 05:37:20.708518 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-05 05:37:20.708530 | orchestrator |
2026-04-05 05:37:20.708541 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 05:37:20.708551 | orchestrator | Sunday 05 April 2026 05:36:44 +0000 (0:00:01.108) 0:23:20.877 **********
2026-04-05 05:37:20.708563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-05 05:37:20.708576 | orchestrator |
2026-04-05 05:37:20.708589 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 05:37:20.708602 | orchestrator | Sunday 05 April 2026 05:36:45 +0000 (0:00:01.162) 0:23:22.039 **********
2026-04-05 05:37:20.708614 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.708626 | orchestrator |
2026-04-05 05:37:20.708639 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 05:37:20.708651 | orchestrator | Sunday 05 April 2026 05:36:46 +0000 (0:00:01.548) 0:23:23.587 **********
2026-04-05 05:37:20.708664 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708676 | orchestrator |
2026-04-05 05:37:20.708689 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 05:37:20.708702 | orchestrator | Sunday 05 April 2026 05:36:48 +0000 (0:00:01.209) 0:23:24.797 **********
2026-04-05 05:37:20.708714 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708726 | orchestrator |
2026-04-05 05:37:20.708739 | orchestrator | TASK [ceph-handler : Check for a rgw container]
********************************
2026-04-05 05:37:20.708752 | orchestrator | Sunday 05 April 2026 05:36:49 +0000 (0:00:01.182) 0:23:25.980 **********
2026-04-05 05:37:20.708764 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708777 | orchestrator |
2026-04-05 05:37:20.708789 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 05:37:20.708802 | orchestrator | Sunday 05 April 2026 05:36:50 +0000 (0:00:01.157) 0:23:27.138 **********
2026-04-05 05:37:20.708815 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.708827 | orchestrator |
2026-04-05 05:37:20.708841 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 05:37:20.708854 | orchestrator | Sunday 05 April 2026 05:36:51 +0000 (0:00:01.568) 0:23:28.707 **********
2026-04-05 05:37:20.708866 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708879 | orchestrator |
2026-04-05 05:37:20.708891 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 05:37:20.708904 | orchestrator | Sunday 05 April 2026 05:36:53 +0000 (0:00:01.162) 0:23:29.869 **********
2026-04-05 05:37:20.708917 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.708929 | orchestrator |
2026-04-05 05:37:20.708940 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 05:37:20.708951 | orchestrator | Sunday 05 April 2026 05:36:54 +0000 (0:00:01.107) 0:23:30.977 **********
2026-04-05 05:37:20.708961 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.708972 | orchestrator |
2026-04-05 05:37:20.708983 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 05:37:20.708993 | orchestrator | Sunday 05 April 2026 05:36:55 +0000 (0:00:01.529) 0:23:32.506 **********
2026-04-05 05:37:20.709004 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.709015 | orchestrator |
2026-04-05 05:37:20.709025 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 05:37:20.709054 | orchestrator | Sunday 05 April 2026 05:36:57 +0000 (0:00:01.550) 0:23:34.057 **********
2026-04-05 05:37:20.709073 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709084 | orchestrator |
2026-04-05 05:37:20.709095 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:37:20.709106 | orchestrator | Sunday 05 April 2026 05:36:58 +0000 (0:00:01.177) 0:23:35.235 **********
2026-04-05 05:37:20.709116 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.709145 | orchestrator |
2026-04-05 05:37:20.709157 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:37:20.709168 | orchestrator | Sunday 05 April 2026 05:36:59 +0000 (0:00:01.153) 0:23:36.388 **********
2026-04-05 05:37:20.709178 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709189 | orchestrator |
2026-04-05 05:37:20.709199 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:37:20.709210 | orchestrator | Sunday 05 April 2026 05:37:00 +0000 (0:00:01.152) 0:23:37.541 **********
2026-04-05 05:37:20.709221 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709231 | orchestrator |
2026-04-05 05:37:20.709242 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:37:20.709258 | orchestrator | Sunday 05 April 2026 05:37:01 +0000 (0:00:01.110) 0:23:38.651 **********
2026-04-05 05:37:20.709269 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709280 | orchestrator |
2026-04-05 05:37:20.709290 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:37:20.709301 | orchestrator | Sunday 05 April
2026 05:37:03 +0000 (0:00:01.124) 0:23:39.775 **********
2026-04-05 05:37:20.709312 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709322 | orchestrator |
2026-04-05 05:37:20.709333 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:37:20.709343 | orchestrator | Sunday 05 April 2026 05:37:04 +0000 (0:00:01.169) 0:23:40.945 **********
2026-04-05 05:37:20.709354 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709365 | orchestrator |
2026-04-05 05:37:20.709376 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:37:20.709386 | orchestrator | Sunday 05 April 2026 05:37:05 +0000 (0:00:01.244) 0:23:42.189 **********
2026-04-05 05:37:20.709397 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.709408 | orchestrator |
2026-04-05 05:37:20.709418 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:37:20.709429 | orchestrator | Sunday 05 April 2026 05:37:06 +0000 (0:00:01.145) 0:23:43.334 **********
2026-04-05 05:37:20.709440 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.709450 | orchestrator |
2026-04-05 05:37:20.709461 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:37:20.709472 | orchestrator | Sunday 05 April 2026 05:37:07 +0000 (0:00:01.203) 0:23:44.538 **********
2026-04-05 05:37:20.709482 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:37:20.709493 | orchestrator |
2026-04-05 05:37:20.709504 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:37:20.709514 | orchestrator | Sunday 05 April 2026 05:37:09 +0000 (0:00:01.189) 0:23:45.727 **********
2026-04-05 05:37:20.709525 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709535 | orchestrator |
2026-04-05 05:37:20.709546 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:37:20.709557 | orchestrator | Sunday 05 April 2026 05:37:10 +0000 (0:00:01.206) 0:23:46.934 **********
2026-04-05 05:37:20.709568 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709578 | orchestrator |
2026-04-05 05:37:20.709589 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:37:20.709600 | orchestrator | Sunday 05 April 2026 05:37:11 +0000 (0:00:01.151) 0:23:48.086 **********
2026-04-05 05:37:20.709610 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709621 | orchestrator |
2026-04-05 05:37:20.709631 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:37:20.709642 | orchestrator | Sunday 05 April 2026 05:37:12 +0000 (0:00:01.135) 0:23:49.221 **********
2026-04-05 05:37:20.709660 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709670 | orchestrator |
2026-04-05 05:37:20.709681 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:37:20.709692 | orchestrator | Sunday 05 April 2026 05:37:13 +0000 (0:00:01.129) 0:23:50.350 **********
2026-04-05 05:37:20.709702 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709713 | orchestrator |
2026-04-05 05:37:20.709724 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:37:20.709734 | orchestrator | Sunday 05 April 2026 05:37:14 +0000 (0:00:01.126) 0:23:51.477 **********
2026-04-05 05:37:20.709745 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709755 | orchestrator |
2026-04-05 05:37:20.709766 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:37:20.709777 | orchestrator | Sunday 05 April 2026 05:37:15 +0000 (0:00:01.126) 0:23:52.603 **********
2026-04-05
05:37:20.709787 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709798 | orchestrator |
2026-04-05 05:37:20.709809 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:37:20.709819 | orchestrator | Sunday 05 April 2026 05:37:17 +0000 (0:00:01.128) 0:23:53.731 **********
2026-04-05 05:37:20.709830 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709840 | orchestrator |
2026-04-05 05:37:20.709851 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:37:20.709862 | orchestrator | Sunday 05 April 2026 05:37:18 +0000 (0:00:01.161) 0:23:54.893 **********
2026-04-05 05:37:20.709872 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709883 | orchestrator |
2026-04-05 05:37:20.709893 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:37:20.709905 | orchestrator | Sunday 05 April 2026 05:37:19 +0000 (0:00:01.177) 0:23:56.070 **********
2026-04-05 05:37:20.709924 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.709943 | orchestrator |
2026-04-05 05:37:20.709959 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:37:20.709977 | orchestrator | Sunday 05 April 2026 05:37:20 +0000 (0:00:01.172) 0:23:57.242 **********
2026-04-05 05:37:20.709996 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:37:20.710013 | orchestrator |
2026-04-05 05:37:20.710112 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:38:09.992820 | orchestrator | Sunday 05 April 2026 05:37:21 +0000 (0:00:01.094) 0:23:58.337 **********
2026-04-05 05:38:09.992958 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.992970 | orchestrator |
2026-04-05 05:38:09.992978 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:38:09.992985 | orchestrator | Sunday 05 April 2026 05:37:22 +0000 (0:00:01.127) 0:23:59.465 **********
2026-04-05 05:38:09.992991 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.992998 | orchestrator |
2026-04-05 05:38:09.993005 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:38:09.993011 | orchestrator | Sunday 05 April 2026 05:37:24 +0000 (0:00:01.952) 0:24:01.418 **********
2026-04-05 05:38:09.993018 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993024 | orchestrator |
2026-04-05 05:38:09.993030 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:38:09.993036 | orchestrator | Sunday 05 April 2026 05:37:27 +0000 (0:00:02.434) 0:24:03.852 **********
2026-04-05 05:38:09.993055 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-05 05:38:09.993063 | orchestrator |
2026-04-05 05:38:09.993069 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:38:09.993075 | orchestrator | Sunday 05 April 2026 05:37:28 +0000 (0:00:01.176) 0:24:05.029 **********
2026-04-05 05:38:09.993081 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993087 | orchestrator |
2026-04-05 05:38:09.993094 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:38:09.993116 | orchestrator | Sunday 05 April 2026 05:37:29 +0000 (0:00:01.173) 0:24:06.203 **********
2026-04-05 05:38:09.993123 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993129 | orchestrator |
2026-04-05 05:38:09.993135 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:38:09.993141 | orchestrator | Sunday 05 April 2026 05:37:30 +0000 (0:00:01.149) 0:24:07.353 **********
2026-04-05
05:38:09.993147 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:38:09.993153 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:38:09.993160 | orchestrator |
2026-04-05 05:38:09.993166 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:38:09.993172 | orchestrator | Sunday 05 April 2026 05:37:32 +0000 (0:00:01.816) 0:24:09.169 **********
2026-04-05 05:38:09.993178 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993184 | orchestrator |
2026-04-05 05:38:09.993190 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:38:09.993197 | orchestrator | Sunday 05 April 2026 05:37:33 +0000 (0:00:01.487) 0:24:10.657 **********
2026-04-05 05:38:09.993203 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993209 | orchestrator |
2026-04-05 05:38:09.993215 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:38:09.993221 | orchestrator | Sunday 05 April 2026 05:37:35 +0000 (0:00:01.224) 0:24:11.881 **********
2026-04-05 05:38:09.993227 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993233 | orchestrator |
2026-04-05 05:38:09.993239 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:38:09.993245 | orchestrator | Sunday 05 April 2026 05:37:36 +0000 (0:00:01.118) 0:24:13.000 **********
2026-04-05 05:38:09.993251 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993257 | orchestrator |
2026-04-05 05:38:09.993263 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:38:09.993270 | orchestrator | Sunday 05 April 2026 05:37:37 +0000 (0:00:01.115) 0:24:14.115 **********
2026-04-05 05:38:09.993276 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-05 05:38:09.993282 | orchestrator |
2026-04-05 05:38:09.993288 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:38:09.993294 | orchestrator | Sunday 05 April 2026 05:37:38 +0000 (0:00:01.139) 0:24:15.254 **********
2026-04-05 05:38:09.993300 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993306 | orchestrator |
2026-04-05 05:38:09.993313 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:38:09.993319 | orchestrator | Sunday 05 April 2026 05:37:40 +0000 (0:00:01.778) 0:24:17.033 **********
2026-04-05 05:38:09.993325 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:38:09.993331 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:38:09.993337 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:38:09.993343 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993349 | orchestrator |
2026-04-05 05:38:09.993355 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:38:09.993361 | orchestrator | Sunday 05 April 2026 05:37:41 +0000 (0:00:01.126) 0:24:18.159 **********
2026-04-05 05:38:09.993367 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993373 | orchestrator |
2026-04-05 05:38:09.993381 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:38:09.993388 | orchestrator | Sunday 05 April 2026 05:37:42 +0000 (0:00:01.147) 0:24:19.307 **********
2026-04-05 05:38:09.993395 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993402 | orchestrator |
2026-04-05 05:38:09.993410 | orchestrator | TASK [ceph-container-common : Copy ceph dev image
file] ************************
2026-04-05 05:38:09.993417 | orchestrator | Sunday 05 April 2026 05:37:43 +0000 (0:00:01.193) 0:24:20.501 **********
2026-04-05 05:38:09.993429 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993437 | orchestrator |
2026-04-05 05:38:09.993444 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:38:09.993451 | orchestrator | Sunday 05 April 2026 05:37:44 +0000 (0:00:01.158) 0:24:21.660 **********
2026-04-05 05:38:09.993458 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993465 | orchestrator |
2026-04-05 05:38:09.993486 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:38:09.993493 | orchestrator | Sunday 05 April 2026 05:37:46 +0000 (0:00:01.186) 0:24:22.847 **********
2026-04-05 05:38:09.993501 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993508 | orchestrator |
2026-04-05 05:38:09.993516 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:38:09.993523 | orchestrator | Sunday 05 April 2026 05:37:47 +0000 (0:00:01.166) 0:24:24.013 **********
2026-04-05 05:38:09.993530 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993537 | orchestrator |
2026-04-05 05:38:09.993545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:38:09.993553 | orchestrator | Sunday 05 April 2026 05:37:49 +0000 (0:00:02.559) 0:24:26.573 **********
2026-04-05 05:38:09.993560 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993567 | orchestrator |
2026-04-05 05:38:09.993575 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:38:09.993582 | orchestrator | Sunday 05 April 2026 05:37:51 +0000 (0:00:01.294) 0:24:27.867 **********
2026-04-05 05:38:09.993592 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-05 05:38:09.993600 | orchestrator |
2026-04-05 05:38:09.993607 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:38:09.993615 | orchestrator | Sunday 05 April 2026 05:37:52 +0000 (0:00:01.163) 0:24:29.030 **********
2026-04-05 05:38:09.993622 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993628 | orchestrator |
2026-04-05 05:38:09.993634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:38:09.993640 | orchestrator | Sunday 05 April 2026 05:37:53 +0000 (0:00:01.170) 0:24:30.201 **********
2026-04-05 05:38:09.993646 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993653 | orchestrator |
2026-04-05 05:38:09.993659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:38:09.993665 | orchestrator | Sunday 05 April 2026 05:37:54 +0000 (0:00:01.141) 0:24:31.342 **********
2026-04-05 05:38:09.993671 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993677 | orchestrator |
2026-04-05 05:38:09.993683 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:38:09.993689 | orchestrator | Sunday 05 April 2026 05:37:55 +0000 (0:00:01.218) 0:24:32.560 **********
2026-04-05 05:38:09.993695 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993701 | orchestrator |
2026-04-05 05:38:09.993707 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:38:09.993713 | orchestrator | Sunday 05 April 2026 05:37:57 +0000 (0:00:01.254) 0:24:33.816 **********
2026-04-05 05:38:09.993719 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993725 | orchestrator |
2026-04-05 05:38:09.993731 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release
octopus] *******************
2026-04-05 05:38:09.993737 | orchestrator | Sunday 05 April 2026 05:37:58 +0000 (0:00:01.162) 0:24:34.978 **********
2026-04-05 05:38:09.993743 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993749 | orchestrator |
2026-04-05 05:38:09.993755 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:38:09.993761 | orchestrator | Sunday 05 April 2026 05:37:59 +0000 (0:00:01.134) 0:24:36.113 **********
2026-04-05 05:38:09.993767 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993773 | orchestrator |
2026-04-05 05:38:09.993780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:38:09.993790 | orchestrator | Sunday 05 April 2026 05:38:00 +0000 (0:00:01.202) 0:24:37.315 **********
2026-04-05 05:38:09.993796 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:38:09.993802 | orchestrator |
2026-04-05 05:38:09.993808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:38:09.993814 | orchestrator | Sunday 05 April 2026 05:38:01 +0000 (0:00:01.141) 0:24:38.457 **********
2026-04-05 05:38:09.993820 | orchestrator | ok: [testbed-node-0]
2026-04-05 05:38:09.993826 | orchestrator |
2026-04-05 05:38:09.993832 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:38:09.993838 | orchestrator | Sunday 05 April 2026 05:38:02 +0000 (0:00:01.181) 0:24:39.638 **********
2026-04-05 05:38:09.993844 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-05 05:38:09.993851 | orchestrator |
2026-04-05 05:38:09.993857 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:38:09.993896 | orchestrator | Sunday 05 April 2026 05:38:04 +0000 (0:00:01.372) 0:24:41.010 **********
2026-04-05 05:38:09.993903 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-05 05:38:09.993909 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-05 05:38:09.993916 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-05 05:38:09.993922 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-05 05:38:09.993928 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-05 05:38:09.993934 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-05 05:38:09.993940 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-05 05:38:09.993946 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:38:09.993952 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:38:09.993958 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:38:09.993964 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:38:09.993970 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:38:09.993976 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:38:09.993982 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:38:09.993988 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-05 05:38:09.993994 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-05 05:38:09.994000 | orchestrator |
2026-04-05 05:38:09.994010 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:39:02.696688 | orchestrator | Sunday 05 April 2026 05:38:10 +0000 (0:00:06.703) 0:24:47.713 **********
2026-04-05 05:39:02.696809 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.696825 | orchestrator |
2026-04-05 05:39:02.696838 | orchestrator | TASK [ceph-config :
Reset num_osds] ********************************************
2026-04-05 05:39:02.696850 | orchestrator | Sunday 05 April 2026 05:38:12 +0000 (0:00:01.122) 0:24:48.836 **********
2026-04-05 05:39:02.696861 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.696871 | orchestrator |
2026-04-05 05:39:02.696886 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:39:02.696905 | orchestrator | Sunday 05 April 2026 05:38:13 +0000 (0:00:01.114) 0:24:49.950 **********
2026-04-05 05:39:02.696916 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.696927 | orchestrator |
2026-04-05 05:39:02.696938 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:39:02.696949 | orchestrator | Sunday 05 April 2026 05:38:14 +0000 (0:00:01.163) 0:24:51.114 **********
2026-04-05 05:39:02.696979 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.696991 | orchestrator |
2026-04-05 05:39:02.697003 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:39:02.697022 | orchestrator | Sunday 05 April 2026 05:38:15 +0000 (0:00:01.102) 0:24:52.217 **********
2026-04-05 05:39:02.697059 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697070 | orchestrator |
2026-04-05 05:39:02.697081 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:39:02.697092 | orchestrator | Sunday 05 April 2026 05:38:16 +0000 (0:00:01.097) 0:24:53.314 **********
2026-04-05 05:39:02.697103 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697114 | orchestrator |
2026-04-05 05:39:02.697125 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:39:02.697137 | orchestrator | Sunday 05 April 2026 05:38:17 +0000 (0:00:01.127) 0:24:54.442 **********
2026-04-05 05:39:02.697148 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697161 | orchestrator |
2026-04-05 05:39:02.697174 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:39:02.697187 | orchestrator | Sunday 05 April 2026 05:38:18 +0000 (0:00:01.098) 0:24:55.540 **********
2026-04-05 05:39:02.697199 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697212 | orchestrator |
2026-04-05 05:39:02.697225 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:39:02.697239 | orchestrator | Sunday 05 April 2026 05:38:19 +0000 (0:00:01.114) 0:24:56.654 **********
2026-04-05 05:39:02.697252 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697264 | orchestrator |
2026-04-05 05:39:02.697275 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:39:02.697286 | orchestrator | Sunday 05 April 2026 05:38:21 +0000 (0:00:01.169) 0:24:57.824 **********
2026-04-05 05:39:02.697297 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697307 | orchestrator |
2026-04-05 05:39:02.697318 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:39:02.697329 | orchestrator | Sunday 05 April 2026 05:38:22 +0000 (0:00:01.142) 0:24:58.966 **********
2026-04-05 05:39:02.697340 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697351 | orchestrator |
2026-04-05 05:39:02.697361 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:39:02.697372 | orchestrator | Sunday 05 April 2026 05:38:23 +0000 (0:00:01.174) 0:25:00.141 **********
2026-04-05 05:39:02.697383 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:02.697393 | orchestrator |
2026-04-05 05:39:02.697404 |
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 05:39:02.697415 | orchestrator | Sunday 05 April 2026 05:38:24 +0000 (0:00:01.219) 0:25:01.360 ********** 2026-04-05 05:39:02.697425 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697436 | orchestrator | 2026-04-05 05:39:02.697447 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 05:39:02.697457 | orchestrator | Sunday 05 April 2026 05:38:25 +0000 (0:00:01.226) 0:25:02.586 ********** 2026-04-05 05:39:02.697468 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697479 | orchestrator | 2026-04-05 05:39:02.697489 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 05:39:02.697500 | orchestrator | Sunday 05 April 2026 05:38:27 +0000 (0:00:01.168) 0:25:03.754 ********** 2026-04-05 05:39:02.697511 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697521 | orchestrator | 2026-04-05 05:39:02.697532 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:39:02.697543 | orchestrator | Sunday 05 April 2026 05:38:28 +0000 (0:00:01.264) 0:25:05.019 ********** 2026-04-05 05:39:02.697553 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697565 | orchestrator | 2026-04-05 05:39:02.697575 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:39:02.697586 | orchestrator | Sunday 05 April 2026 05:38:29 +0000 (0:00:01.140) 0:25:06.160 ********** 2026-04-05 05:39:02.697597 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697608 | orchestrator | 2026-04-05 05:39:02.697645 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:39:02.697667 | orchestrator | Sunday 05 April 
2026 05:38:30 +0000 (0:00:01.168) 0:25:07.328 ********** 2026-04-05 05:39:02.697678 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697689 | orchestrator | 2026-04-05 05:39:02.697700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 05:39:02.697711 | orchestrator | Sunday 05 April 2026 05:38:31 +0000 (0:00:01.125) 0:25:08.454 ********** 2026-04-05 05:39:02.697722 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697733 | orchestrator | 2026-04-05 05:39:02.697744 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:39:02.697755 | orchestrator | Sunday 05 April 2026 05:38:32 +0000 (0:00:01.169) 0:25:09.624 ********** 2026-04-05 05:39:02.697766 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697777 | orchestrator | 2026-04-05 05:39:02.697806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:39:02.697819 | orchestrator | Sunday 05 April 2026 05:38:34 +0000 (0:00:01.150) 0:25:10.774 ********** 2026-04-05 05:39:02.697829 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697840 | orchestrator | 2026-04-05 05:39:02.697851 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:39:02.697862 | orchestrator | Sunday 05 April 2026 05:38:35 +0000 (0:00:01.125) 0:25:11.900 ********** 2026-04-05 05:39:02.697873 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:39:02.697884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:39:02.697895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:39:02.697906 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.697916 | orchestrator | 2026-04-05 05:39:02.697927 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_interface - ipv4] ****** 2026-04-05 05:39:02.697944 | orchestrator | Sunday 05 April 2026 05:38:36 +0000 (0:00:01.402) 0:25:13.302 ********** 2026-04-05 05:39:02.697959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:39:02.697978 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:39:02.697996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:39:02.698014 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.698115 | orchestrator | 2026-04-05 05:39:02.698127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:39:02.698138 | orchestrator | Sunday 05 April 2026 05:38:38 +0000 (0:00:01.867) 0:25:15.170 ********** 2026-04-05 05:39:02.698149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 05:39:02.698170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-05 05:39:02.698181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 05:39:02.698192 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.698203 | orchestrator | 2026-04-05 05:39:02.698214 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:39:02.698224 | orchestrator | Sunday 05 April 2026 05:38:40 +0000 (0:00:01.811) 0:25:16.982 ********** 2026-04-05 05:39:02.698235 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.698246 | orchestrator | 2026-04-05 05:39:02.698257 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:39:02.698267 | orchestrator | Sunday 05 April 2026 05:38:41 +0000 (0:00:01.263) 0:25:18.245 ********** 2026-04-05 05:39:02.698279 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-05 05:39:02.698289 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.698300 | orchestrator 
| 2026-04-05 05:39:02.698311 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 05:39:02.698322 | orchestrator | Sunday 05 April 2026 05:38:42 +0000 (0:00:01.256) 0:25:19.502 ********** 2026-04-05 05:39:02.698333 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:02.698353 | orchestrator | 2026-04-05 05:39:02.698364 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-05 05:39:02.698375 | orchestrator | Sunday 05 April 2026 05:38:44 +0000 (0:00:01.754) 0:25:21.256 ********** 2026-04-05 05:39:02.698386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 05:39:02.698396 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:39:02.698408 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:39:02.698418 | orchestrator | 2026-04-05 05:39:02.698429 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-05 05:39:02.698440 | orchestrator | Sunday 05 April 2026 05:38:46 +0000 (0:00:01.631) 0:25:22.887 ********** 2026-04-05 05:39:02.698450 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-04-05 05:39:02.698461 | orchestrator | 2026-04-05 05:39:02.698472 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-05 05:39:02.698482 | orchestrator | Sunday 05 April 2026 05:38:47 +0000 (0:00:01.466) 0:25:24.354 ********** 2026-04-05 05:39:02.698493 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:02.698504 | orchestrator | 2026-04-05 05:39:02.698515 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-05 05:39:02.698525 | orchestrator | Sunday 05 April 2026 05:38:49 +0000 (0:00:01.526) 0:25:25.881 ********** 2026-04-05 05:39:02.698536 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:02.698547 | orchestrator | 2026-04-05 05:39:02.698557 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-05 05:39:02.698568 | orchestrator | Sunday 05 April 2026 05:38:50 +0000 (0:00:01.201) 0:25:27.082 ********** 2026-04-05 05:39:02.698579 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 05:39:02.698589 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 05:39:02.698600 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 05:39:02.698611 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-05 05:39:02.698648 | orchestrator | 2026-04-05 05:39:02.698660 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-05 05:39:02.698671 | orchestrator | Sunday 05 April 2026 05:38:57 +0000 (0:00:07.496) 0:25:34.579 ********** 2026-04-05 05:39:02.698681 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:02.698692 | orchestrator | 2026-04-05 05:39:02.698703 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-05 05:39:02.698714 | orchestrator | Sunday 05 April 2026 05:38:59 +0000 (0:00:01.162) 0:25:35.741 ********** 2026-04-05 05:39:02.698725 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-05 05:39:02.698736 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-05 05:39:02.698747 | orchestrator | 2026-04-05 05:39:02.698757 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-05 05:39:02.698768 | orchestrator | Sunday 05 April 2026 05:39:02 +0000 (0:00:03.575) 0:25:39.317 ********** 2026-04-05 05:39:02.698789 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-05 05:39:49.821012 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 05:39:49.821127 | orchestrator 
| 2026-04-05 05:39:49.821143 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-05 05:39:49.821156 | orchestrator | Sunday 05 April 2026 05:39:04 +0000 (0:00:02.093) 0:25:41.410 ********** 2026-04-05 05:39:49.821168 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:49.821179 | orchestrator | 2026-04-05 05:39:49.821190 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-05 05:39:49.821201 | orchestrator | Sunday 05 April 2026 05:39:06 +0000 (0:00:01.519) 0:25:42.929 ********** 2026-04-05 05:39:49.821212 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:49.821223 | orchestrator | 2026-04-05 05:39:49.821233 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-05 05:39:49.821244 | orchestrator | Sunday 05 April 2026 05:39:07 +0000 (0:00:01.132) 0:25:44.062 ********** 2026-04-05 05:39:49.821278 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:49.821290 | orchestrator | 2026-04-05 05:39:49.821314 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-05 05:39:49.821326 | orchestrator | Sunday 05 April 2026 05:39:08 +0000 (0:00:01.105) 0:25:45.168 ********** 2026-04-05 05:39:49.821337 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-04-05 05:39:49.821348 | orchestrator | 2026-04-05 05:39:49.821359 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-05 05:39:49.821370 | orchestrator | Sunday 05 April 2026 05:39:09 +0000 (0:00:01.448) 0:25:46.616 ********** 2026-04-05 05:39:49.821381 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:49.821391 | orchestrator | 2026-04-05 05:39:49.821463 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-05 05:39:49.821478 | orchestrator | 
Sunday 05 April 2026 05:39:11 +0000 (0:00:01.198) 0:25:47.815 ********** 2026-04-05 05:39:49.821489 | orchestrator | skipping: [testbed-node-0] 2026-04-05 05:39:49.821499 | orchestrator | 2026-04-05 05:39:49.821510 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-05 05:39:49.821520 | orchestrator | Sunday 05 April 2026 05:39:12 +0000 (0:00:01.186) 0:25:49.001 ********** 2026-04-05 05:39:49.821531 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-04-05 05:39:49.821542 | orchestrator | 2026-04-05 05:39:49.821554 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-05 05:39:49.821567 | orchestrator | Sunday 05 April 2026 05:39:13 +0000 (0:00:01.451) 0:25:50.453 ********** 2026-04-05 05:39:49.821579 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:49.821591 | orchestrator | 2026-04-05 05:39:49.821603 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-05 05:39:49.821616 | orchestrator | Sunday 05 April 2026 05:39:15 +0000 (0:00:02.047) 0:25:52.500 ********** 2026-04-05 05:39:49.821628 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:49.821641 | orchestrator | 2026-04-05 05:39:49.821654 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-05 05:39:49.821666 | orchestrator | Sunday 05 April 2026 05:39:17 +0000 (0:00:01.997) 0:25:54.498 ********** 2026-04-05 05:39:49.821678 | orchestrator | ok: [testbed-node-0] 2026-04-05 05:39:49.821691 | orchestrator | 2026-04-05 05:39:49.821704 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-05 05:39:49.821716 | orchestrator | Sunday 05 April 2026 05:39:20 +0000 (0:00:02.429) 0:25:56.928 ********** 2026-04-05 05:39:49.821729 | orchestrator | changed: [testbed-node-0] 2026-04-05 05:39:49.821742 | orchestrator | 
2026-04-05 05:39:49.821754 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 05:39:49.821767 | orchestrator | Sunday 05 April 2026 05:39:24 +0000 (0:00:03.919) 0:26:00.847 **********
2026-04-05 05:39:49.821779 | orchestrator | skipping: [testbed-node-0]
2026-04-05 05:39:49.821792 | orchestrator |
2026-04-05 05:39:49.821804 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-05 05:39:49.821816 | orchestrator |
2026-04-05 05:39:49.821828 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-05 05:39:49.821841 | orchestrator | Sunday 05 April 2026 05:39:25 +0000 (0:00:01.017) 0:26:01.864 **********
2026-04-05 05:39:49.821853 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:39:49.821866 | orchestrator |
2026-04-05 05:39:49.821878 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-05 05:39:49.821890 | orchestrator | Sunday 05 April 2026 05:39:27 +0000 (0:00:02.664) 0:26:04.529 **********
2026-04-05 05:39:49.821903 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:39:49.821916 | orchestrator |
2026-04-05 05:39:49.821928 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:39:49.821938 | orchestrator | Sunday 05 April 2026 05:39:29 +0000 (0:00:02.154) 0:26:06.683 **********
2026-04-05 05:39:49.821949 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-05 05:39:49.821969 | orchestrator |
2026-04-05 05:39:49.821979 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:39:49.821990 | orchestrator | Sunday 05 April 2026 05:39:31 +0000 (0:00:01.158) 0:26:07.842 **********
2026-04-05 05:39:49.822001 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822012 | orchestrator |
2026-04-05 05:39:49.822083 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:39:49.822094 | orchestrator | Sunday 05 April 2026 05:39:32 +0000 (0:00:01.519) 0:26:09.362 **********
2026-04-05 05:39:49.822105 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822116 | orchestrator |
2026-04-05 05:39:49.822126 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:39:49.822137 | orchestrator | Sunday 05 April 2026 05:39:33 +0000 (0:00:01.140) 0:26:10.502 **********
2026-04-05 05:39:49.822148 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822158 | orchestrator |
2026-04-05 05:39:49.822170 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:39:49.822181 | orchestrator | Sunday 05 April 2026 05:39:35 +0000 (0:00:01.499) 0:26:12.001 **********
2026-04-05 05:39:49.822192 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822203 | orchestrator |
2026-04-05 05:39:49.822232 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:39:49.822243 | orchestrator | Sunday 05 April 2026 05:39:36 +0000 (0:00:01.154) 0:26:13.156 **********
2026-04-05 05:39:49.822254 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822265 | orchestrator |
2026-04-05 05:39:49.822276 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:39:49.822286 | orchestrator | Sunday 05 April 2026 05:39:37 +0000 (0:00:01.108) 0:26:14.265 **********
2026-04-05 05:39:49.822297 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822307 | orchestrator |
2026-04-05 05:39:49.822318 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:39:49.822329 | orchestrator | Sunday 05 April 2026 05:39:38 +0000 (0:00:01.254) 0:26:15.519 **********
2026-04-05 05:39:49.822339 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:39:49.822350 | orchestrator |
2026-04-05 05:39:49.822360 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:39:49.822377 | orchestrator | Sunday 05 April 2026 05:39:39 +0000 (0:00:01.143) 0:26:16.662 **********
2026-04-05 05:39:49.822388 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822437 | orchestrator |
2026-04-05 05:39:49.822451 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:39:49.822462 | orchestrator | Sunday 05 April 2026 05:39:41 +0000 (0:00:01.151) 0:26:17.814 **********
2026-04-05 05:39:49.822473 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:39:49.822483 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:39:49.822494 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:39:49.822505 | orchestrator |
2026-04-05 05:39:49.822515 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:39:49.822526 | orchestrator | Sunday 05 April 2026 05:39:42 +0000 (0:00:01.221) 0:26:19.528 **********
2026-04-05 05:39:49.822536 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:39:49.822547 | orchestrator |
2026-04-05 05:39:49.822557 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:39:49.822568 | orchestrator | Sunday 05 April 2026 05:39:44 +0000 (0:00:02.674) 0:26:20.749 **********
2026-04-05 05:39:49.822579 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:39:49.822590 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:39:49.822600 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:39:49.822611 | orchestrator |
2026-04-05 05:39:49.822630 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 05:39:49.822659 | orchestrator | Sunday 05 April 2026 05:39:46 +0000 (0:00:02.674) 0:26:23.424 **********
2026-04-05 05:39:49.822677 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-05 05:39:49.822696 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:39:49.822715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-05 05:39:49.822734 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:39:49.822753 | orchestrator |
2026-04-05 05:39:49.822771 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 05:39:49.822792 | orchestrator | Sunday 05 April 2026 05:39:48 +0000 (0:00:01.415) 0:26:24.840 **********
2026-04-05 05:39:49.822813 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:39:49.822837 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:39:49.822858 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:39:49.822880 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:39:49.822901 | orchestrator |
2026-04-05 05:39:49.822922 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 05:39:49.822943 | orchestrator | Sunday 05 April 2026 05:39:49 +0000 (0:00:01.602) 0:26:26.443 **********
2026-04-05 05:39:49.822966 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:39:49.822989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:39:49.823027 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:40:09.716604 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.716678 | orchestrator |
2026-04-05 05:40:09.716688 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 05:40:09.716697 | orchestrator | Sunday 05 April 2026 05:39:50 +0000 (0:00:01.180) 0:26:27.624 **********
2026-04-05 05:40:09.716721 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:39:44.524932', 'end': '2026-04-05 05:39:44.557147', 'delta': '0:00:00.032215', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:40:09.716754 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:39:45.033413', 'end': '2026-04-05 05:39:45.073900', 'delta': '0:00:00.040487', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:40:09.716766 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:39:45.554524', 'end': '2026-04-05 05:39:45.599270', 'delta': '0:00:00.044746', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:40:09.716775 | orchestrator |
2026-04-05 05:40:09.716786 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 05:40:09.716795 | orchestrator | Sunday 05 April 2026 05:39:52 +0000 (0:00:01.192) 0:26:28.816 **********
2026-04-05 05:40:09.716805 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:40:09.716816 | orchestrator |
2026-04-05 05:40:09.716825 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 05:40:09.716834 | orchestrator | Sunday 05 April 2026 05:39:53 +0000 (0:00:01.285) 0:26:30.102 **********
2026-04-05 05:40:09.716845 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.716855 | orchestrator |
2026-04-05 05:40:09.716865 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 05:40:09.716874 | orchestrator | Sunday 05 April 2026 05:39:54 +0000 (0:00:01.240) 0:26:31.343 **********
2026-04-05 05:40:09.716884 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:40:09.716931 | orchestrator |
2026-04-05 05:40:09.716946 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 05:40:09.716956 | orchestrator | Sunday 05 April 2026 05:39:55 +0000 (0:00:01.151) 0:26:32.495 **********
2026-04-05 05:40:09.716965 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:40:09.716971 | orchestrator |
2026-04-05 05:40:09.716977 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:40:09.716983 | orchestrator | Sunday 05 April 2026 05:39:58 +0000 (0:00:02.440) 0:26:34.935 **********
2026-04-05 05:40:09.716988 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:40:09.716994 | orchestrator |
2026-04-05 05:40:09.717000 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 05:40:09.717006 | orchestrator | Sunday 05 April 2026 05:39:59 +0000 (0:00:01.183) 0:26:36.118 **********
2026-04-05 05:40:09.717012 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717017 | orchestrator |
2026-04-05 05:40:09.717023 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 05:40:09.717029 | orchestrator | Sunday 05 April 2026 05:40:00 +0000 (0:00:01.127) 0:26:37.245 **********
2026-04-05 05:40:09.717034 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717047 | orchestrator |
2026-04-05 05:40:09.717053 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:40:09.717059 | orchestrator | Sunday 05 April 2026 05:40:01 +0000 (0:00:01.215) 0:26:38.461 **********
2026-04-05 05:40:09.717064 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717070 | orchestrator |
2026-04-05 05:40:09.717089 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 05:40:09.717100 | orchestrator | Sunday 05 April 2026 05:40:02 +0000 (0:00:01.121) 0:26:39.583 **********
2026-04-05 05:40:09.717109 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717119 | orchestrator |
2026-04-05 05:40:09.717129 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 05:40:09.717143 | orchestrator | Sunday 05 April 2026 05:40:03 +0000 (0:00:01.107) 0:26:40.691 **********
2026-04-05 05:40:09.717153 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717163 | orchestrator |
2026-04-05 05:40:09.717173 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 05:40:09.717183 | orchestrator | Sunday 05 April 2026 05:40:05 +0000 (0:00:01.114) 0:26:41.805 **********
2026-04-05 05:40:09.717192 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717202 | orchestrator |
2026-04-05 05:40:09.717212 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 05:40:09.717222 | orchestrator | Sunday 05 April 2026 05:40:06 +0000 (0:00:01.168) 0:26:42.974 **********
2026-04-05 05:40:09.717232 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717242 | orchestrator |
2026-04-05 05:40:09.717252 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 05:40:09.717262 | orchestrator | Sunday 05 April 2026 05:40:07 +0000 (0:00:01.083) 0:26:44.058 **********
2026-04-05 05:40:09.717272 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717281 | orchestrator |
2026-04-05 05:40:09.717291 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 05:40:09.717302 | orchestrator | Sunday 05 April 2026 05:40:08 +0000 (0:00:01.115) 0:26:45.174 **********
2026-04-05 05:40:09.717313 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:40:09.717349 | orchestrator |
2026-04-05 05:40:09.717358 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 05:40:09.717368 | orchestrator | Sunday 05 April 2026 05:40:09 +0000 (0:00:01.118) 0:26:46.293 **********
2026-04-05 05:40:09.717380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:09.717393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:09.717404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:09.717416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-05 05:40:09.717437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:09.717448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:09.717474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:40:10.931682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-04-05 05:40:10.931773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:40:10.931812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:40:10.931825 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:10.931838 | orchestrator | 2026-04-05 05:40:10.931850 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:40:10.931861 | orchestrator | Sunday 05 April 2026 05:40:10 +0000 (0:00:01.231) 0:26:47.524 ********** 2026-04-05 05:40:10.931874 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931916 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931930 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931954 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931973 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.931984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:10.932012 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '57f1796b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1', 'scsi-SQEMU_QEMU_HARDDISK_57f1796b-7846-459b-ac21-4d82893b0fc1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:46.046234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:46.046393 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:40:46.046413 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046427 | orchestrator | 2026-04-05 05:40:46.046440 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:40:46.046453 | 
orchestrator | Sunday 05 April 2026 05:40:12 +0000 (0:00:01.225) 0:26:48.750 ********** 2026-04-05 05:40:46.046464 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.046475 | orchestrator | 2026-04-05 05:40:46.046486 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 05:40:46.046497 | orchestrator | Sunday 05 April 2026 05:40:13 +0000 (0:00:01.442) 0:26:50.192 ********** 2026-04-05 05:40:46.046508 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.046519 | orchestrator | 2026-04-05 05:40:46.046529 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:40:46.046540 | orchestrator | Sunday 05 April 2026 05:40:14 +0000 (0:00:01.095) 0:26:51.287 ********** 2026-04-05 05:40:46.046551 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.046562 | orchestrator | 2026-04-05 05:40:46.046572 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:40:46.046583 | orchestrator | Sunday 05 April 2026 05:40:16 +0000 (0:00:01.454) 0:26:52.741 ********** 2026-04-05 05:40:46.046594 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046604 | orchestrator | 2026-04-05 05:40:46.046615 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:40:46.046642 | orchestrator | Sunday 05 April 2026 05:40:17 +0000 (0:00:01.138) 0:26:53.880 ********** 2026-04-05 05:40:46.046656 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046669 | orchestrator | 2026-04-05 05:40:46.046683 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:40:46.046696 | orchestrator | Sunday 05 April 2026 05:40:18 +0000 (0:00:01.214) 0:26:55.095 ********** 2026-04-05 05:40:46.046709 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046722 | orchestrator | 2026-04-05 05:40:46.046734 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 05:40:46.046747 | orchestrator | Sunday 05 April 2026 05:40:19 +0000 (0:00:01.201) 0:26:56.296 ********** 2026-04-05 05:40:46.046760 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-05 05:40:46.046773 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 05:40:46.046786 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-05 05:40:46.046798 | orchestrator | 2026-04-05 05:40:46.046811 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 05:40:46.046824 | orchestrator | Sunday 05 April 2026 05:40:21 +0000 (0:00:01.779) 0:26:58.075 ********** 2026-04-05 05:40:46.046837 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-05 05:40:46.046849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-05 05:40:46.046862 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-05 05:40:46.046882 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046895 | orchestrator | 2026-04-05 05:40:46.046909 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 05:40:46.046921 | orchestrator | Sunday 05 April 2026 05:40:22 +0000 (0:00:01.192) 0:26:59.268 ********** 2026-04-05 05:40:46.046934 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.046947 | orchestrator | 2026-04-05 05:40:46.046959 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 05:40:46.046972 | orchestrator | Sunday 05 April 2026 05:40:23 +0000 (0:00:01.157) 0:27:00.425 ********** 2026-04-05 05:40:46.046985 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:40:46.046997 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 
05:40:46.047008 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:40:46.047018 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:40:46.047029 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:40:46.047040 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:40:46.047071 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:40:46.047082 | orchestrator | 2026-04-05 05:40:46.047093 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 05:40:46.047104 | orchestrator | Sunday 05 April 2026 05:40:25 +0000 (0:00:02.226) 0:27:02.652 ********** 2026-04-05 05:40:46.047115 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:40:46.047125 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-05 05:40:46.047136 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:40:46.047147 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:40:46.047157 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:40:46.047201 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 05:40:46.047214 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:40:46.047225 | orchestrator | 2026-04-05 05:40:46.047236 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 05:40:46.047246 | orchestrator | Sunday 05 April 2026 05:40:28 +0000 (0:00:02.399) 0:27:05.052 
********** 2026-04-05 05:40:46.047257 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-05 05:40:46.047268 | orchestrator | 2026-04-05 05:40:46.047278 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 05:40:46.047289 | orchestrator | Sunday 05 April 2026 05:40:29 +0000 (0:00:01.123) 0:27:06.175 ********** 2026-04-05 05:40:46.047300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-05 05:40:46.047310 | orchestrator | 2026-04-05 05:40:46.047321 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 05:40:46.047331 | orchestrator | Sunday 05 April 2026 05:40:30 +0000 (0:00:01.329) 0:27:07.504 ********** 2026-04-05 05:40:46.047342 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.047353 | orchestrator | 2026-04-05 05:40:46.047363 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 05:40:46.047374 | orchestrator | Sunday 05 April 2026 05:40:32 +0000 (0:00:01.624) 0:27:09.129 ********** 2026-04-05 05:40:46.047385 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047396 | orchestrator | 2026-04-05 05:40:46.047406 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 05:40:46.047417 | orchestrator | Sunday 05 April 2026 05:40:33 +0000 (0:00:01.142) 0:27:10.272 ********** 2026-04-05 05:40:46.047435 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047446 | orchestrator | 2026-04-05 05:40:46.047457 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 05:40:46.047467 | orchestrator | Sunday 05 April 2026 05:40:34 +0000 (0:00:01.148) 0:27:11.421 ********** 2026-04-05 05:40:46.047478 | orchestrator | skipping: [testbed-node-1] 2026-04-05 
05:40:46.047489 | orchestrator | 2026-04-05 05:40:46.047505 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 05:40:46.047516 | orchestrator | Sunday 05 April 2026 05:40:35 +0000 (0:00:01.118) 0:27:12.539 ********** 2026-04-05 05:40:46.047527 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.047538 | orchestrator | 2026-04-05 05:40:46.047548 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 05:40:46.047559 | orchestrator | Sunday 05 April 2026 05:40:37 +0000 (0:00:01.542) 0:27:14.082 ********** 2026-04-05 05:40:46.047569 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047580 | orchestrator | 2026-04-05 05:40:46.047591 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 05:40:46.047602 | orchestrator | Sunday 05 April 2026 05:40:38 +0000 (0:00:01.127) 0:27:15.209 ********** 2026-04-05 05:40:46.047612 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047623 | orchestrator | 2026-04-05 05:40:46.047633 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 05:40:46.047644 | orchestrator | Sunday 05 April 2026 05:40:39 +0000 (0:00:01.138) 0:27:16.347 ********** 2026-04-05 05:40:46.047655 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.047665 | orchestrator | 2026-04-05 05:40:46.047676 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 05:40:46.047687 | orchestrator | Sunday 05 April 2026 05:40:41 +0000 (0:00:01.613) 0:27:17.961 ********** 2026-04-05 05:40:46.047701 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.047719 | orchestrator | 2026-04-05 05:40:46.047736 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 05:40:46.047756 | orchestrator | Sunday 05 April 2026 05:40:42 
+0000 (0:00:01.552) 0:27:19.514 ********** 2026-04-05 05:40:46.047774 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047792 | orchestrator | 2026-04-05 05:40:46.047804 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 05:40:46.047814 | orchestrator | Sunday 05 April 2026 05:40:43 +0000 (0:00:00.777) 0:27:20.291 ********** 2026-04-05 05:40:46.047825 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:40:46.047836 | orchestrator | 2026-04-05 05:40:46.047846 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 05:40:46.047857 | orchestrator | Sunday 05 April 2026 05:40:44 +0000 (0:00:00.798) 0:27:21.090 ********** 2026-04-05 05:40:46.047867 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047878 | orchestrator | 2026-04-05 05:40:46.047889 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 05:40:46.047899 | orchestrator | Sunday 05 April 2026 05:40:45 +0000 (0:00:00.844) 0:27:21.934 ********** 2026-04-05 05:40:46.047910 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:40:46.047921 | orchestrator | 2026-04-05 05:40:46.047932 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 05:40:46.047942 | orchestrator | Sunday 05 April 2026 05:40:45 +0000 (0:00:00.767) 0:27:22.702 ********** 2026-04-05 05:40:46.047960 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.266464 | orchestrator | 2026-04-05 05:41:26.266597 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 05:41:26.266616 | orchestrator | Sunday 05 April 2026 05:40:46 +0000 (0:00:00.761) 0:27:23.463 ********** 2026-04-05 05:41:26.266643 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.266656 | orchestrator | 2026-04-05 05:41:26.266667 | orchestrator | TASK [ceph-handler 
: Set_fact handler_rbd_status] ****************************** 2026-04-05 05:41:26.266678 | orchestrator | Sunday 05 April 2026 05:40:47 +0000 (0:00:00.834) 0:27:24.298 ********** 2026-04-05 05:41:26.266715 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.266726 | orchestrator | 2026-04-05 05:41:26.266737 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 05:41:26.266748 | orchestrator | Sunday 05 April 2026 05:40:48 +0000 (0:00:00.787) 0:27:25.086 ********** 2026-04-05 05:41:26.266759 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:41:26.266771 | orchestrator | 2026-04-05 05:41:26.266785 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 05:41:26.266805 | orchestrator | Sunday 05 April 2026 05:40:49 +0000 (0:00:00.792) 0:27:25.878 ********** 2026-04-05 05:41:26.266825 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:41:26.266849 | orchestrator | 2026-04-05 05:41:26.266878 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 05:41:26.266897 | orchestrator | Sunday 05 April 2026 05:40:49 +0000 (0:00:00.789) 0:27:26.668 ********** 2026-04-05 05:41:26.266917 | orchestrator | ok: [testbed-node-1] 2026-04-05 05:41:26.266937 | orchestrator | 2026-04-05 05:41:26.266958 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 05:41:26.266978 | orchestrator | Sunday 05 April 2026 05:40:50 +0000 (0:00:00.835) 0:27:27.504 ********** 2026-04-05 05:41:26.266998 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267046 | orchestrator | 2026-04-05 05:41:26.267067 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 05:41:26.267079 | orchestrator | Sunday 05 April 2026 05:40:51 +0000 (0:00:00.755) 0:27:28.260 ********** 2026-04-05 05:41:26.267090 | orchestrator | skipping: 
[testbed-node-1] 2026-04-05 05:41:26.267101 | orchestrator | 2026-04-05 05:41:26.267112 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 05:41:26.267123 | orchestrator | Sunday 05 April 2026 05:40:52 +0000 (0:00:00.793) 0:27:29.053 ********** 2026-04-05 05:41:26.267134 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267144 | orchestrator | 2026-04-05 05:41:26.267156 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 05:41:26.267167 | orchestrator | Sunday 05 April 2026 05:40:53 +0000 (0:00:00.744) 0:27:29.798 ********** 2026-04-05 05:41:26.267178 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267189 | orchestrator | 2026-04-05 05:41:26.267199 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 05:41:26.267210 | orchestrator | Sunday 05 April 2026 05:40:53 +0000 (0:00:00.755) 0:27:30.553 ********** 2026-04-05 05:41:26.267221 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267231 | orchestrator | 2026-04-05 05:41:26.267242 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 05:41:26.267269 | orchestrator | Sunday 05 April 2026 05:40:54 +0000 (0:00:00.766) 0:27:31.320 ********** 2026-04-05 05:41:26.267280 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267291 | orchestrator | 2026-04-05 05:41:26.267302 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 05:41:26.267312 | orchestrator | Sunday 05 April 2026 05:40:55 +0000 (0:00:00.714) 0:27:32.034 ********** 2026-04-05 05:41:26.267323 | orchestrator | skipping: [testbed-node-1] 2026-04-05 05:41:26.267334 | orchestrator | 2026-04-05 05:41:26.267344 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 
05:41:26.267356 | orchestrator | Sunday 05 April 2026 05:40:56 +0000 (0:00:00.733) 0:27:32.768 **********
2026-04-05 05:41:26.267367 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267378 | orchestrator |
2026-04-05 05:41:26.267388 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:41:26.267399 | orchestrator | Sunday 05 April 2026 05:40:56 +0000 (0:00:00.779) 0:27:33.548 **********
2026-04-05 05:41:26.267410 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267421 | orchestrator |
2026-04-05 05:41:26.267439 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:41:26.267472 | orchestrator | Sunday 05 April 2026 05:40:57 +0000 (0:00:00.768) 0:27:34.316 **********
2026-04-05 05:41:26.267490 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267509 | orchestrator |
2026-04-05 05:41:26.267526 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:41:26.267544 | orchestrator | Sunday 05 April 2026 05:40:58 +0000 (0:00:00.801) 0:27:35.117 **********
2026-04-05 05:41:26.267563 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267582 | orchestrator |
2026-04-05 05:41:26.267602 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:41:26.267621 | orchestrator | Sunday 05 April 2026 05:40:59 +0000 (0:00:00.815) 0:27:35.932 **********
2026-04-05 05:41:26.267635 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267646 | orchestrator |
2026-04-05 05:41:26.267656 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:41:26.267667 | orchestrator | Sunday 05 April 2026 05:40:59 +0000 (0:00:00.759) 0:27:36.692 **********
2026-04-05 05:41:26.267678 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.267689 | orchestrator |
2026-04-05 05:41:26.267700 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:41:26.267716 | orchestrator | Sunday 05 April 2026 05:41:01 +0000 (0:00:01.821) 0:27:38.514 **********
2026-04-05 05:41:26.267726 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.267737 | orchestrator |
2026-04-05 05:41:26.267748 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:41:26.267759 | orchestrator | Sunday 05 April 2026 05:41:03 +0000 (0:00:02.166) 0:27:40.680 **********
2026-04-05 05:41:26.267770 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-05 05:41:26.267782 | orchestrator |
2026-04-05 05:41:26.267814 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:41:26.267826 | orchestrator | Sunday 05 April 2026 05:41:05 +0000 (0:00:01.151) 0:27:41.831 **********
2026-04-05 05:41:26.267837 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267848 | orchestrator |
2026-04-05 05:41:26.267859 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:41:26.267869 | orchestrator | Sunday 05 April 2026 05:41:06 +0000 (0:00:01.227) 0:27:43.059 **********
2026-04-05 05:41:26.267880 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.267890 | orchestrator |
2026-04-05 05:41:26.267901 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:41:26.267912 | orchestrator | Sunday 05 April 2026 05:41:07 +0000 (0:00:01.216) 0:27:44.276 **********
2026-04-05 05:41:26.267923 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:41:26.267933 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:41:26.267944 | orchestrator |
2026-04-05 05:41:26.267955 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:41:26.267965 | orchestrator | Sunday 05 April 2026 05:41:09 +0000 (0:00:01.831) 0:27:46.108 **********
2026-04-05 05:41:26.267976 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.267987 | orchestrator |
2026-04-05 05:41:26.267997 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:41:26.268008 | orchestrator | Sunday 05 April 2026 05:41:10 +0000 (0:00:01.439) 0:27:47.547 **********
2026-04-05 05:41:26.268043 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268055 | orchestrator |
2026-04-05 05:41:26.268065 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:41:26.268076 | orchestrator | Sunday 05 April 2026 05:41:12 +0000 (0:00:01.179) 0:27:48.726 **********
2026-04-05 05:41:26.268087 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268097 | orchestrator |
2026-04-05 05:41:26.268108 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:41:26.268119 | orchestrator | Sunday 05 April 2026 05:41:12 +0000 (0:00:00.766) 0:27:49.493 **********
2026-04-05 05:41:26.268138 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268149 | orchestrator |
2026-04-05 05:41:26.268160 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:41:26.268171 | orchestrator | Sunday 05 April 2026 05:41:13 +0000 (0:00:00.767) 0:27:50.260 **********
2026-04-05 05:41:26.268182 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-05 05:41:26.268192 | orchestrator |
2026-04-05 05:41:26.268203 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:41:26.268214 | orchestrator | Sunday 05 April 2026 05:41:14 +0000 (0:00:01.109) 0:27:51.370 **********
2026-04-05 05:41:26.268224 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.268235 | orchestrator |
2026-04-05 05:41:26.268246 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:41:26.268257 | orchestrator | Sunday 05 April 2026 05:41:16 +0000 (0:00:01.725) 0:27:53.095 **********
2026-04-05 05:41:26.268268 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:41:26.268279 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:41:26.268290 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:41:26.268300 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268311 | orchestrator |
2026-04-05 05:41:26.268322 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:41:26.268333 | orchestrator | Sunday 05 April 2026 05:41:17 +0000 (0:00:01.143) 0:27:54.239 **********
2026-04-05 05:41:26.268344 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268355 | orchestrator |
2026-04-05 05:41:26.268365 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:41:26.268376 | orchestrator | Sunday 05 April 2026 05:41:18 +0000 (0:00:01.114) 0:27:55.353 **********
2026-04-05 05:41:26.268387 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268398 | orchestrator |
2026-04-05 05:41:26.268409 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 05:41:26.268419 | orchestrator | Sunday 05 April 2026 05:41:19 +0000 (0:00:01.176) 0:27:56.530 **********
2026-04-05 05:41:26.268430 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268441 | orchestrator |
2026-04-05 05:41:26.268451 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:41:26.268462 | orchestrator | Sunday 05 April 2026 05:41:21 +0000 (0:00:01.211) 0:27:57.742 **********
2026-04-05 05:41:26.268473 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268483 | orchestrator |
2026-04-05 05:41:26.268494 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:41:26.268505 | orchestrator | Sunday 05 April 2026 05:41:22 +0000 (0:00:01.193) 0:27:58.935 **********
2026-04-05 05:41:26.268516 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:41:26.268526 | orchestrator |
2026-04-05 05:41:26.268537 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:41:26.268548 | orchestrator | Sunday 05 April 2026 05:41:23 +0000 (0:00:00.795) 0:27:59.730 **********
2026-04-05 05:41:26.268559 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.268570 | orchestrator |
2026-04-05 05:41:26.268583 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:41:26.268602 | orchestrator | Sunday 05 April 2026 05:41:25 +0000 (0:00:02.227) 0:28:01.958 **********
2026-04-05 05:41:26.268620 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:41:26.268638 | orchestrator |
2026-04-05 05:41:26.268710 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:41:26.268734 | orchestrator | Sunday 05 April 2026 05:41:26 +0000 (0:00:00.796) 0:28:02.754 **********
2026-04-05 05:41:26.268754 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-05 05:41:26.268774 | orchestrator |
2026-04-05 05:41:26.268804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:42:03.408751 | orchestrator | Sunday 05 April 2026 05:41:27 +0000 (0:00:01.160) 0:28:03.914 **********
2026-04-05 05:42:03.408866 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.408939 | orchestrator |
2026-04-05 05:42:03.408952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:42:03.408964 | orchestrator | Sunday 05 April 2026 05:41:28 +0000 (0:00:01.256) 0:28:05.171 **********
2026-04-05 05:42:03.408975 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.408986 | orchestrator |
2026-04-05 05:42:03.408997 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:42:03.409008 | orchestrator | Sunday 05 April 2026 05:41:29 +0000 (0:00:01.175) 0:28:06.347 **********
2026-04-05 05:42:03.409019 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409029 | orchestrator |
2026-04-05 05:42:03.409040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:42:03.409051 | orchestrator | Sunday 05 April 2026 05:41:30 +0000 (0:00:01.210) 0:28:07.557 **********
2026-04-05 05:42:03.409062 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409072 | orchestrator |
2026-04-05 05:42:03.409083 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:42:03.409094 | orchestrator | Sunday 05 April 2026 05:41:32 +0000 (0:00:01.191) 0:28:08.749 **********
2026-04-05 05:42:03.409105 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409115 | orchestrator |
2026-04-05 05:42:03.409126 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 05:42:03.409137 | orchestrator | Sunday 05 April 2026 05:41:33 +0000 (0:00:01.176) 0:28:09.926 **********
2026-04-05 05:42:03.409147 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409158 | orchestrator |
2026-04-05 05:42:03.409169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:42:03.409179 | orchestrator | Sunday 05 April 2026 05:41:34 +0000 (0:00:01.173) 0:28:11.100 **********
2026-04-05 05:42:03.409190 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409201 | orchestrator |
2026-04-05 05:42:03.409212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:42:03.409222 | orchestrator | Sunday 05 April 2026 05:41:35 +0000 (0:00:01.170) 0:28:12.270 **********
2026-04-05 05:42:03.409233 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409243 | orchestrator |
2026-04-05 05:42:03.409254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:42:03.409265 | orchestrator | Sunday 05 April 2026 05:41:36 +0000 (0:00:01.217) 0:28:13.488 **********
2026-04-05 05:42:03.409276 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:42:03.409289 | orchestrator |
2026-04-05 05:42:03.409302 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:42:03.409316 | orchestrator | Sunday 05 April 2026 05:41:37 +0000 (0:00:00.821) 0:28:14.309 **********
2026-04-05 05:42:03.409329 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-05 05:42:03.409342 | orchestrator |
2026-04-05 05:42:03.409356 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:42:03.409384 | orchestrator | Sunday 05 April 2026 05:41:38 +0000 (0:00:01.115) 0:28:15.424 **********
2026-04-05 05:42:03.409398 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-05 05:42:03.409412 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-05 05:42:03.409425 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-05 05:42:03.409439 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-05 05:42:03.409451 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-05 05:42:03.409464 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-05 05:42:03.409477 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-05 05:42:03.409490 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:42:03.409525 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:42:03.409539 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:42:03.409552 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:42:03.409564 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:42:03.409576 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:42:03.409589 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:42:03.409602 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-05 05:42:03.409614 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-05 05:42:03.409627 | orchestrator |
2026-04-05 05:42:03.409640 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:42:03.409653 | orchestrator | Sunday 05 April 2026 05:41:45 +0000 (0:00:06.653) 0:28:22.078 **********
2026-04-05 05:42:03.409664 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409675 | orchestrator |
2026-04-05 05:42:03.409685 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:42:03.409696 | orchestrator | Sunday 05 April 2026 05:41:46 +0000 (0:00:00.799) 0:28:22.877 **********
2026-04-05 05:42:03.409706 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409717 | orchestrator |
2026-04-05 05:42:03.409728 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:42:03.409738 | orchestrator | Sunday 05 April 2026 05:41:46 +0000 (0:00:00.775) 0:28:23.653 **********
2026-04-05 05:42:03.409749 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409759 | orchestrator |
2026-04-05 05:42:03.409770 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:42:03.409781 | orchestrator | Sunday 05 April 2026 05:41:47 +0000 (0:00:00.810) 0:28:24.464 **********
2026-04-05 05:42:03.409792 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409802 | orchestrator |
2026-04-05 05:42:03.409813 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:42:03.409842 | orchestrator | Sunday 05 April 2026 05:41:48 +0000 (0:00:00.844) 0:28:25.309 **********
2026-04-05 05:42:03.409854 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409865 | orchestrator |
2026-04-05 05:42:03.409875 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:42:03.409909 | orchestrator | Sunday 05 April 2026 05:41:49 +0000 (0:00:00.786) 0:28:26.095 **********
2026-04-05 05:42:03.409920 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409931 | orchestrator |
2026-04-05 05:42:03.409942 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:42:03.409953 | orchestrator | Sunday 05 April 2026 05:41:50 +0000 (0:00:00.771) 0:28:26.866 **********
2026-04-05 05:42:03.409963 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.409974 | orchestrator |
2026-04-05 05:42:03.409985 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:42:03.409996 | orchestrator | Sunday 05 April 2026 05:41:50 +0000 (0:00:00.779) 0:28:27.646 **********
2026-04-05 05:42:03.410007 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410075 | orchestrator |
2026-04-05 05:42:03.410088 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:42:03.410099 | orchestrator | Sunday 05 April 2026 05:41:51 +0000 (0:00:00.852) 0:28:28.498 **********
2026-04-05 05:42:03.410110 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410121 | orchestrator |
2026-04-05 05:42:03.410131 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:42:03.410142 | orchestrator | Sunday 05 April 2026 05:41:52 +0000 (0:00:00.768) 0:28:29.266 **********
2026-04-05 05:42:03.410153 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410163 | orchestrator |
2026-04-05 05:42:03.410174 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:42:03.410192 | orchestrator | Sunday 05 April 2026 05:41:53 +0000 (0:00:00.753) 0:28:30.020 **********
2026-04-05 05:42:03.410203 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410213 | orchestrator |
2026-04-05 05:42:03.410224 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:42:03.410235 | orchestrator | Sunday 05 April 2026 05:41:54 +0000 (0:00:00.763) 0:28:30.784 **********
2026-04-05 05:42:03.410245 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410256 | orchestrator |
2026-04-05 05:42:03.410267 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:42:03.410277 | orchestrator | Sunday 05 April 2026 05:41:54 +0000 (0:00:00.775) 0:28:31.559 **********
2026-04-05 05:42:03.410288 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410298 | orchestrator |
2026-04-05 05:42:03.410309 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:42:03.410320 | orchestrator | Sunday 05 April 2026 05:41:55 +0000 (0:00:00.872) 0:28:32.431 **********
2026-04-05 05:42:03.410330 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410341 | orchestrator |
2026-04-05 05:42:03.410352 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:42:03.410368 | orchestrator | Sunday 05 April 2026 05:41:56 +0000 (0:00:00.769) 0:28:33.201 **********
2026-04-05 05:42:03.410379 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410390 | orchestrator |
2026-04-05 05:42:03.410400 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 05:42:03.410411 | orchestrator | Sunday 05 April 2026 05:41:57 +0000 (0:00:00.874) 0:28:34.075 **********
2026-04-05 05:42:03.410422 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410432 | orchestrator |
2026-04-05 05:42:03.410443 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 05:42:03.410453 | orchestrator | Sunday 05 April 2026 05:41:58 +0000 (0:00:00.772) 0:28:34.847 **********
2026-04-05 05:42:03.410464 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410475 | orchestrator |
2026-04-05 05:42:03.410486 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:42:03.410498 | orchestrator | Sunday 05 April 2026 05:41:58 +0000 (0:00:00.767) 0:28:35.615 **********
2026-04-05 05:42:03.410509 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410520 | orchestrator |
2026-04-05 05:42:03.410530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:42:03.410541 | orchestrator | Sunday 05 April 2026 05:41:59 +0000 (0:00:00.775) 0:28:36.391 **********
2026-04-05 05:42:03.410551 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410562 | orchestrator |
2026-04-05 05:42:03.410573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:42:03.410583 | orchestrator | Sunday 05 April 2026 05:42:00 +0000 (0:00:00.794) 0:28:37.185 **********
2026-04-05 05:42:03.410594 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410604 | orchestrator |
2026-04-05 05:42:03.410615 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:42:03.410626 | orchestrator | Sunday 05 April 2026 05:42:01 +0000 (0:00:00.775) 0:28:37.961 **********
2026-04-05 05:42:03.410636 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410647 | orchestrator |
2026-04-05 05:42:03.410658 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:42:03.410669 | orchestrator | Sunday 05 April 2026 05:42:02 +0000 (0:00:00.844) 0:28:38.805 **********
2026-04-05 05:42:03.410679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:42:03.410690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:42:03.410701 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:42:03.410711 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:42:03.410730 | orchestrator |
2026-04-05 05:42:03.410741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:42:03.410752 | orchestrator | Sunday 05 April 2026 05:42:03 +0000 (0:00:01.054) 0:28:39.860 **********
2026-04-05 05:42:03.410762 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:42:03.410781 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:43:02.922318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:43:02.922432 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.922449 | orchestrator |
2026-04-05 05:43:02.922463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:43:02.922475 | orchestrator | Sunday 05 April 2026 05:42:04 +0000 (0:00:01.063) 0:28:40.923 **********
2026-04-05 05:43:02.922487 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-05 05:43:02.922498 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-05 05:43:02.922509 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-05 05:43:02.922520 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.922531 | orchestrator |
2026-04-05 05:43:02.922543 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:43:02.922554 | orchestrator | Sunday 05 April 2026 05:42:05 +0000 (0:00:01.066) 0:28:41.990 **********
2026-04-05 05:43:02.922565 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.922576 | orchestrator |
2026-04-05 05:43:02.922587 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:43:02.922597 | orchestrator | Sunday 05 April 2026 05:42:06 +0000 (0:00:00.792) 0:28:42.782 **********
2026-04-05 05:43:02.922609 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-05 05:43:02.922620 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.922631 | orchestrator |
2026-04-05 05:43:02.922642 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:43:02.922653 | orchestrator | Sunday 05 April 2026 05:42:06 +0000 (0:00:00.917) 0:28:43.700 **********
2026-04-05 05:43:02.922663 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.922675 | orchestrator |
2026-04-05 05:43:02.922731 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 05:43:02.922743 | orchestrator | Sunday 05 April 2026 05:42:08 +0000 (0:00:01.407) 0:28:45.107 **********
2026-04-05 05:43:02.922754 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:43:02.922766 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-05 05:43:02.922777 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:43:02.922788 | orchestrator |
2026-04-05 05:43:02.922799 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 05:43:02.922810 | orchestrator | Sunday 05 April 2026 05:42:10 +0000 (0:00:01.748) 0:28:46.856 **********
2026-04-05 05:43:02.922821 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-05 05:43:02.922832 | orchestrator |
2026-04-05 05:43:02.922843 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-05 05:43:02.922854 | orchestrator | Sunday 05 April 2026 05:42:11 +0000 (0:00:01.187) 0:28:48.044 **********
2026-04-05 05:43:02.922865 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.922875 | orchestrator |
2026-04-05 05:43:02.922902 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-05 05:43:02.922913 | orchestrator | Sunday 05 April 2026 05:42:12 +0000 (0:00:01.549) 0:28:49.594 **********
2026-04-05 05:43:02.922924 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.922935 | orchestrator |
2026-04-05 05:43:02.922946 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-05 05:43:02.922957 | orchestrator | Sunday 05 April 2026 05:42:14 +0000 (0:00:01.190) 0:28:50.785 **********
2026-04-05 05:43:02.922968 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:43:02.923011 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:43:02.923031 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:43:02.923051 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-05 05:43:02.923069 | orchestrator |
2026-04-05 05:43:02.923087 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-05 05:43:02.923102 | orchestrator | Sunday 05 April 2026 05:42:21 +0000 (0:00:07.753) 0:28:58.538 **********
2026-04-05 05:43:02.923113 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.923124 | orchestrator |
2026-04-05 05:43:02.923135 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-05 05:43:02.923145 | orchestrator | Sunday 05 April 2026 05:42:23 +0000 (0:00:01.184) 0:28:59.723 **********
2026-04-05 05:43:02.923156 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 05:43:02.923167 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-05 05:43:02.923177 | orchestrator |
2026-04-05 05:43:02.923188 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-05 05:43:02.923199 | orchestrator | Sunday 05 April 2026 05:42:26 +0000 (0:00:03.224) 0:29:02.947 **********
2026-04-05 05:43:02.923210 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-05 05:43:02.923220 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-05 05:43:02.923231 | orchestrator |
2026-04-05 05:43:02.923242 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-05 05:43:02.923252 | orchestrator | Sunday 05 April 2026 05:42:28 +0000 (0:00:02.040) 0:29:04.988 **********
2026-04-05 05:43:02.923263 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.923274 | orchestrator |
2026-04-05 05:43:02.923284 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-05 05:43:02.923295 | orchestrator | Sunday 05 April 2026 05:42:29 +0000 (0:00:01.553) 0:29:06.541 **********
2026-04-05 05:43:02.923306 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.923316 | orchestrator |
2026-04-05 05:43:02.923327 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 05:43:02.923338 | orchestrator | Sunday 05 April 2026 05:42:30 +0000 (0:00:00.814) 0:29:07.356 **********
2026-04-05 05:43:02.923348 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.923359 | orchestrator |
2026-04-05 05:43:02.923370 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 05:43:02.923397 | orchestrator | Sunday 05 April 2026 05:42:31 +0000 (0:00:00.772) 0:29:08.129 **********
2026-04-05 05:43:02.923409 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-05 05:43:02.923420 | orchestrator |
2026-04-05 05:43:02.923430 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-05 05:43:02.923441 | orchestrator | Sunday 05 April 2026 05:42:32 +0000 (0:00:01.130) 0:29:09.260 **********
2026-04-05 05:43:02.923452 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.923462 | orchestrator |
2026-04-05 05:43:02.923473 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-05 05:43:02.923484 | orchestrator | Sunday 05 April 2026 05:42:33 +0000 (0:00:01.176) 0:29:10.437 **********
2026-04-05 05:43:02.923495 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.923505 | orchestrator |
2026-04-05 05:43:02.923516 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-05 05:43:02.923527 | orchestrator | Sunday 05 April 2026 05:42:34 +0000 (0:00:01.215) 0:29:11.652 **********
2026-04-05 05:43:02.923537 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-05 05:43:02.923548 | orchestrator |
2026-04-05 05:43:02.923559 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-05 05:43:02.923570 | orchestrator | Sunday 05 April 2026 05:42:36 +0000 (0:00:01.156) 0:29:12.809 **********
2026-04-05 05:43:02.923581 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.923602 | orchestrator |
2026-04-05 05:43:02.923613 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-05 05:43:02.923624 | orchestrator | Sunday 05 April 2026 05:42:38 +0000 (0:00:02.581) 0:29:15.391 **********
2026-04-05 05:43:02.923634 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.923645 | orchestrator |
2026-04-05 05:43:02.923656 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-05 05:43:02.923666 | orchestrator | Sunday 05 April 2026 05:42:40 +0000 (0:00:02.038) 0:29:17.429 **********
2026-04-05 05:43:02.923677 | orchestrator | ok: [testbed-node-1]
2026-04-05 05:43:02.923724 | orchestrator |
2026-04-05 05:43:02.923735 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-05 05:43:02.923746 | orchestrator | Sunday 05 April 2026 05:42:43 +0000 (0:00:02.516) 0:29:19.946 **********
2026-04-05 05:43:02.923756 | orchestrator | changed: [testbed-node-1]
2026-04-05 05:43:02.923767 | orchestrator |
2026-04-05 05:43:02.923777 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 05:43:02.923788 | orchestrator | Sunday 05 April 2026 05:42:46 +0000 (0:00:03.705) 0:29:23.651 **********
2026-04-05 05:43:02.923799 | orchestrator | skipping: [testbed-node-1]
2026-04-05 05:43:02.923809 | orchestrator |
2026-04-05 05:43:02.923820 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-05 05:43:02.923831 | orchestrator |
2026-04-05 05:43:02.923841 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-05 05:43:02.923852 | orchestrator | Sunday 05 April 2026 05:42:47 +0000 (0:00:01.027) 0:29:24.678 **********
2026-04-05 05:43:02.923869 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:43:02.923880 | orchestrator |
2026-04-05 05:43:02.923890 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-05 05:43:02.923901 | orchestrator | Sunday 05 April 2026 05:42:50 +0000 (0:00:02.551) 0:29:27.230 **********
2026-04-05 05:43:02.923911 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:43:02.923922 | orchestrator |
2026-04-05 05:43:02.923933 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 05:43:02.923943 | orchestrator | Sunday 05 April 2026 05:42:52 +0000 (0:00:02.202) 0:29:29.432 **********
2026-04-05 05:43:02.923954 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-05 05:43:02.923964 | orchestrator |
2026-04-05 05:43:02.923975 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 05:43:02.923986 | orchestrator | Sunday 05 April 2026 05:42:53 +0000 (0:00:01.153) 0:29:30.586 **********
2026-04-05 05:43:02.923996 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924007 | orchestrator |
2026-04-05 05:43:02.924018 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 05:43:02.924028 | orchestrator | Sunday 05 April 2026 05:42:55 +0000 (0:00:01.514) 0:29:32.101 **********
2026-04-05 05:43:02.924039 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924049 | orchestrator |
2026-04-05 05:43:02.924060 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:43:02.924071 | orchestrator | Sunday 05 April 2026 05:42:56 +0000 (0:00:01.259) 0:29:33.360 **********
2026-04-05 05:43:02.924081 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924092 | orchestrator |
2026-04-05 05:43:02.924103 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:43:02.924113 | orchestrator | Sunday 05 April 2026 05:42:58 +0000 (0:00:01.458) 0:29:34.819 **********
2026-04-05 05:43:02.924124 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924135 | orchestrator |
2026-04-05 05:43:02.924146 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 05:43:02.924156 | orchestrator | Sunday 05 April 2026 05:42:59 +0000 (0:00:01.226) 0:29:36.046 **********
2026-04-05 05:43:02.924167 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924178 | orchestrator |
2026-04-05 05:43:02.924189 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 05:43:02.924206 | orchestrator | Sunday 05 April 2026 05:43:00 +0000 (0:00:01.153) 0:29:37.199 **********
2026-04-05 05:43:02.924217 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924227 | orchestrator |
2026-04-05 05:43:02.924238 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 05:43:02.924249 | orchestrator | Sunday 05 April 2026 05:43:01 +0000 (0:00:01.122) 0:29:38.322 **********
2026-04-05 05:43:02.924259 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:02.924270 | orchestrator |
2026-04-05 05:43:02.924281 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 05:43:02.924291 | orchestrator | Sunday 05 April 2026 05:43:02 +0000 (0:00:01.150) 0:29:39.472 **********
2026-04-05 05:43:02.924302 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:02.924313 | orchestrator |
2026-04-05 05:43:02.924329 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:43:27.938797 | orchestrator | Sunday 05 April 2026 05:43:03 +0000 (0:00:01.130) 0:29:40.603 **********
2026-04-05 05:43:27.938894 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:43:27.938907 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:43:27.938917 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:43:27.938926 | orchestrator |
2026-04-05 05:43:27.938936 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:43:27.938945 | orchestrator | Sunday 05 April 2026 05:43:05 +0000 (0:00:01.300) 0:29:42.332 **********
2026-04-05 05:43:27.938954 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:27.938963 | orchestrator |
2026-04-05 05:43:27.938972 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:43:27.938980 | orchestrator | Sunday 05 April 2026 05:43:06 +0000 (0:00:01.300) 0:29:43.633 **********
2026-04-05 05:43:27.938989 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:43:27.938998 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:43:27.939006 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:43:27.939015 | orchestrator |
2026-04-05 05:43:27.939023 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 05:43:27.939032 | orchestrator | Sunday 05 April 2026 05:43:10 +0000 (0:00:03.175) 0:29:46.809 **********
2026-04-05 05:43:27.939042 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:43:27.939051 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:43:27.939060 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:43:27.939069 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939078 | orchestrator |
2026-04-05 05:43:27.939086 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 05:43:27.939095 | orchestrator | Sunday 05 April 2026 05:43:11 +0000 (0:00:01.451) 0:29:48.260 **********
2026-04-05 05:43:27.939105 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939116 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939140 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939149 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939157 | orchestrator |
2026-04-05 05:43:27.939166 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 05:43:27.939195 | orchestrator | Sunday 05 April 2026 05:43:13 +0000 (0:00:01.990) 0:29:50.250 **********
2026-04-05 05:43:27.939206 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939218 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939227 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939236 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939244 | orchestrator |
2026-04-05 05:43:27.939253 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 05:43:27.939262 | orchestrator | Sunday 05 April 2026 05:43:14 +0000 (0:00:01.141) 0:29:51.392 **********
2026-04-05 05:43:27.939290 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:43:07.411834', 'end': '2026-04-05 05:43:07.459687', 'delta': '0:00:00.047853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939304 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:43:07.952896', 'end': '2026-04-05 05:43:07.996536', 'delta': '0:00:00.043640', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939317 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:43:08.864011', 'end': '2026-04-05 05:43:08.910290', 'delta': '0:00:00.046279', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:43:27.939334 | orchestrator |
2026-04-05 05:43:27.939343 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 05:43:27.939353 | orchestrator | Sunday 05 April 2026 05:43:15 +0000 (0:00:01.207) 0:29:52.600 **********
2026-04-05 05:43:27.939363 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:27.939373 | orchestrator |
2026-04-05 05:43:27.939384 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 05:43:27.939393 | orchestrator | Sunday 05 April 2026 05:43:17 +0000 (0:00:01.712) 0:29:54.313 **********
2026-04-05 05:43:27.939404 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939414 | orchestrator |
2026-04-05 05:43:27.939425 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 05:43:27.939435 | orchestrator | Sunday 05 April 2026 05:43:18 +0000 (0:00:01.278) 0:29:55.591 **********
2026-04-05 05:43:27.939444 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:27.939454 | orchestrator |
2026-04-05 05:43:27.939464 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 05:43:27.939474 | orchestrator | Sunday 05 April 2026 05:43:20 +0000 (0:00:01.164) 0:29:56.755 **********
2026-04-05 05:43:27.939483 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:43:27.939493 | orchestrator |
2026-04-05 05:43:27.939503 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:43:27.939514 | orchestrator | Sunday 05 April 2026 05:43:22 +0000 (0:00:01.975) 0:29:58.730 **********
2026-04-05 05:43:27.939523 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:27.939533 | orchestrator |
2026-04-05 05:43:27.939543 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 05:43:27.939553 | orchestrator | Sunday 05 April 2026 05:43:23 +0000 (0:00:01.128) 0:29:59.859 **********
2026-04-05 05:43:27.939564 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939573 | orchestrator |
2026-04-05 05:43:27.939583 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 05:43:27.939593 | orchestrator | Sunday 05 April 2026 05:43:24 +0000 (0:00:01.154) 0:30:01.013 **********
2026-04-05 05:43:27.939641 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939653 | orchestrator |
2026-04-05 05:43:27.939663 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:43:27.939673 | orchestrator | Sunday 05 April 2026 05:43:25 +0000 (0:00:01.230) 0:30:02.244 **********
2026-04-05 05:43:27.939683 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939693 | orchestrator |
2026-04-05 05:43:27.939703 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 05:43:27.939711 | orchestrator | Sunday 05 April 2026 05:43:26 +0000 (0:00:01.122) 0:30:03.367 **********
2026-04-05 05:43:27.939720 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939728 | orchestrator |
2026-04-05 05:43:27.939737 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 05:43:27.939745 | orchestrator | Sunday 05 April 2026 05:43:27 +0000 (0:00:01.130) 0:30:04.497 **********
2026-04-05 05:43:27.939754 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:27.939762 | orchestrator |
2026-04-05 05:43:27.939777 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 05:43:35.073416 | orchestrator | Sunday 05 April 2026 05:43:28 +0000 (0:00:01.150) 0:30:05.648 **********
2026-04-05 05:43:35.073528 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:35.073545 | orchestrator |
2026-04-05 05:43:35.073561 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 05:43:35.073580 | orchestrator | Sunday 05 April 2026 05:43:30 +0000 (0:00:01.165) 0:30:06.814 **********
2026-04-05 05:43:35.073670 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:35.073689 | orchestrator |
2026-04-05 05:43:35.073708 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 05:43:35.073727 | orchestrator | Sunday 05 April 2026 05:43:31 +0000 (0:00:01.126) 0:30:07.940 **********
2026-04-05 05:43:35.073775 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:35.073795 | orchestrator |
2026-04-05 05:43:35.073814 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 05:43:35.073835 | orchestrator | Sunday 05 April 2026 05:43:32 +0000 (0:00:01.166) 0:30:09.107 **********
2026-04-05 05:43:35.073854 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:35.073873 | orchestrator |
2026-04-05 05:43:35.073892 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 05:43:35.073910 | orchestrator | Sunday 05 April 2026 05:43:33 +0000 (0:00:01.221) 0:30:10.329 **********
2026-04-05 05:43:35.073932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.073950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.073980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.073996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-05 05:43:35.074013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.074122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.074140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.074206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-05 05:43:35.074248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.074269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:43:35.074287 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:35.074307 | orchestrator |
2026-04-05 05:43:35.074326 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-05 05:43:35.074338 | orchestrator | Sunday 05 April 2026 05:43:34 +0000 (0:00:01.379) 0:30:11.708 **********
2026-04-05 05:43:35.074350 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:35.074377 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.983976 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984084 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984114 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984126 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984137 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984169 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e425300', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e425300-a1df-4921-af0a-0d26810bd200-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984220 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:43:43.984232 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:43.984243 | orchestrator |
2026-04-05 05:43:43.984254 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 05:43:43.984265 | orchestrator | Sunday 05 April 2026 05:43:36 +0000 (0:00:01.239) 0:30:12.947 **********
2026-04-05 05:43:43.984274 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:43.984284 | orchestrator |
2026-04-05 05:43:43.984294 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 05:43:43.984308 | orchestrator | Sunday 05 April 2026 05:43:37 +0000 (0:00:01.578) 0:30:14.526 **********
2026-04-05 05:43:43.984333 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:43.984344 | orchestrator |
2026-04-05 05:43:43.984353 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:43:43.984363 | orchestrator | Sunday 05 April 2026 05:43:38 +0000 (0:00:01.138) 0:30:15.665 **********
2026-04-05 05:43:43.984372 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:43:43.984382 | orchestrator |
2026-04-05 05:43:43.984392 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:43:43.984405 | orchestrator | Sunday 05 April 2026 05:43:40 +0000 (0:00:01.520) 0:30:17.185 **********
2026-04-05 05:43:43.984419 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:43.984429 | orchestrator |
2026-04-05 05:43:43.984439 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:43:43.984448 | orchestrator | Sunday 05 April 2026 05:43:41 +0000 (0:00:01.101) 0:30:18.287 **********
2026-04-05 05:43:43.984458 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:43.984467 | orchestrator |
2026-04-05 05:43:43.984477 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:43:43.984486 | orchestrator | Sunday 05 April 2026 05:43:42 +0000 (0:00:01.259) 0:30:19.546 **********
2026-04-05 05:43:43.984496 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:43:43.984505 | orchestrator |
2026-04-05 05:43:43.984515 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:43:43.984531 | orchestrator | Sunday 05 April 2026 05:43:43 +0000 (0:00:01.146) 0:30:20.693 **********
2026-04-05 05:44:22.494073 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:44:22.494193 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:44:22.494210 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:44:22.494223 | orchestrator |
2026-04-05 05:44:22.494253 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:44:22.494277 | orchestrator | Sunday 05 April 2026 05:43:46 +0000 (0:00:02.576) 0:30:23.270 **********
2026-04-05 05:44:22.494289 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-05 05:44:22.494300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-05 05:44:22.494311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:44:22.494321 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.494332 | orchestrator |
2026-04-05 05:44:22.494344 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 05:44:22.494354 | orchestrator | Sunday 05 April 2026 05:43:47 +0000 (0:00:01.189) 0:30:24.460 **********
2026-04-05 05:44:22.494365 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.494376 | orchestrator |
2026-04-05 05:44:22.494387 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:44:22.494398 | orchestrator | Sunday 05 April 2026 05:43:48 +0000 (0:00:01.152) 0:30:25.613 **********
2026-04-05 05:44:22.494409 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:44:22.494420 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:44:22.494431 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:44:22.494441 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:44:22.494452 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:44:22.494505 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:44:22.494516 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:44:22.494526 | orchestrator |
2026-04-05 05:44:22.494537 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:44:22.494563 | orchestrator | Sunday 05 April 2026 05:43:51 +0000 (0:00:02.494) 0:30:28.108 **********
2026-04-05 05:44:22.494577 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:44:22.494615 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:44:22.494630 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:44:22.494643 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:44:22.494655 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:44:22.494667 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:44:22.494680 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:44:22.494692 | orchestrator |
2026-04-05 05:44:22.494704 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:44:22.494717 | orchestrator | Sunday 05 April 2026 05:43:54 +0000 (0:00:02.707) 0:30:30.815 **********
2026-04-05 05:44:22.494729 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-05 05:44:22.494743 | orchestrator |
2026-04-05 05:44:22.494755 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 05:44:22.494767
| orchestrator | Sunday 05 April 2026 05:43:55 +0000 (0:00:01.121) 0:30:31.937 ********** 2026-04-05 05:44:22.494781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-05 05:44:22.494794 | orchestrator | 2026-04-05 05:44:22.494806 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 05:44:22.494818 | orchestrator | Sunday 05 April 2026 05:43:56 +0000 (0:00:01.101) 0:30:33.038 ********** 2026-04-05 05:44:22.494830 | orchestrator | ok: [testbed-node-2] 2026-04-05 05:44:22.494843 | orchestrator | 2026-04-05 05:44:22.494856 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 05:44:22.494868 | orchestrator | Sunday 05 April 2026 05:43:57 +0000 (0:00:01.563) 0:30:34.601 ********** 2026-04-05 05:44:22.494880 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:44:22.494892 | orchestrator | 2026-04-05 05:44:22.494904 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 05:44:22.494916 | orchestrator | Sunday 05 April 2026 05:43:59 +0000 (0:00:01.124) 0:30:35.726 ********** 2026-04-05 05:44:22.494928 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:44:22.494941 | orchestrator | 2026-04-05 05:44:22.494954 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 05:44:22.494967 | orchestrator | Sunday 05 April 2026 05:44:00 +0000 (0:00:01.139) 0:30:36.866 ********** 2026-04-05 05:44:22.494979 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:44:22.494990 | orchestrator | 2026-04-05 05:44:22.495000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 05:44:22.495011 | orchestrator | Sunday 05 April 2026 05:44:01 +0000 (0:00:01.099) 0:30:37.965 ********** 2026-04-05 05:44:22.495022 | orchestrator | ok: [testbed-node-2] 
2026-04-05 05:44:22.495032 | orchestrator |
2026-04-05 05:44:22.495043 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 05:44:22.495054 | orchestrator | Sunday 05 April 2026 05:44:02 +0000 (0:00:01.529) 0:30:39.495 **********
2026-04-05 05:44:22.495065 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495075 | orchestrator |
2026-04-05 05:44:22.495086 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 05:44:22.495114 | orchestrator | Sunday 05 April 2026 05:44:03 +0000 (0:00:01.220) 0:30:40.716 **********
2026-04-05 05:44:22.495127 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495137 | orchestrator |
2026-04-05 05:44:22.495148 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 05:44:22.495159 | orchestrator | Sunday 05 April 2026 05:44:05 +0000 (0:00:01.242) 0:30:41.958 **********
2026-04-05 05:44:22.495170 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495180 | orchestrator |
2026-04-05 05:44:22.495191 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 05:44:22.495210 | orchestrator | Sunday 05 April 2026 05:44:06 +0000 (0:00:01.740) 0:30:43.699 **********
2026-04-05 05:44:22.495221 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495231 | orchestrator |
2026-04-05 05:44:22.495242 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 05:44:22.495253 | orchestrator | Sunday 05 April 2026 05:44:08 +0000 (0:00:01.546) 0:30:45.246 **********
2026-04-05 05:44:22.495264 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495274 | orchestrator |
2026-04-05 05:44:22.495285 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:44:22.495296 | orchestrator | Sunday 05 April 2026 05:44:09 +0000 (0:00:00.797) 0:30:46.044 **********
2026-04-05 05:44:22.495307 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495317 | orchestrator |
2026-04-05 05:44:22.495328 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:44:22.495339 | orchestrator | Sunday 05 April 2026 05:44:10 +0000 (0:00:00.840) 0:30:46.884 **********
2026-04-05 05:44:22.495349 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495360 | orchestrator |
2026-04-05 05:44:22.495371 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:44:22.495381 | orchestrator | Sunday 05 April 2026 05:44:10 +0000 (0:00:00.799) 0:30:47.684 **********
2026-04-05 05:44:22.495392 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495402 | orchestrator |
2026-04-05 05:44:22.495413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:44:22.495424 | orchestrator | Sunday 05 April 2026 05:44:11 +0000 (0:00:00.838) 0:30:48.522 **********
2026-04-05 05:44:22.495434 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495445 | orchestrator |
2026-04-05 05:44:22.495474 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:44:22.495492 | orchestrator | Sunday 05 April 2026 05:44:12 +0000 (0:00:00.769) 0:30:49.292 **********
2026-04-05 05:44:22.495503 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495513 | orchestrator |
2026-04-05 05:44:22.495524 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:44:22.495535 | orchestrator | Sunday 05 April 2026 05:44:13 +0000 (0:00:00.770) 0:30:50.063 **********
2026-04-05 05:44:22.495545 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495556 | orchestrator |
2026-04-05 05:44:22.495566 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:44:22.495577 | orchestrator | Sunday 05 April 2026 05:44:14 +0000 (0:00:00.827) 0:30:50.891 **********
2026-04-05 05:44:22.495587 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495598 | orchestrator |
2026-04-05 05:44:22.495608 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:44:22.495619 | orchestrator | Sunday 05 April 2026 05:44:15 +0000 (0:00:00.827) 0:30:51.718 **********
2026-04-05 05:44:22.495629 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495640 | orchestrator |
2026-04-05 05:44:22.495651 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:44:22.495661 | orchestrator | Sunday 05 April 2026 05:44:15 +0000 (0:00:00.847) 0:30:52.566 **********
2026-04-05 05:44:22.495672 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:44:22.495683 | orchestrator |
2026-04-05 05:44:22.495693 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:44:22.495704 | orchestrator | Sunday 05 April 2026 05:44:16 +0000 (0:00:00.834) 0:30:53.400 **********
2026-04-05 05:44:22.495715 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495725 | orchestrator |
2026-04-05 05:44:22.495736 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:44:22.495747 | orchestrator | Sunday 05 April 2026 05:44:17 +0000 (0:00:00.920) 0:30:54.321 **********
2026-04-05 05:44:22.495757 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495768 | orchestrator |
2026-04-05 05:44:22.495779 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:44:22.495796 | orchestrator | Sunday 05 April 2026 05:44:18 +0000 (0:00:00.798) 0:30:55.119 **********
2026-04-05 05:44:22.495807 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495818 | orchestrator |
2026-04-05 05:44:22.495828 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:44:22.495839 | orchestrator | Sunday 05 April 2026 05:44:19 +0000 (0:00:00.800) 0:30:55.919 **********
2026-04-05 05:44:22.495850 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495860 | orchestrator |
2026-04-05 05:44:22.495871 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:44:22.495882 | orchestrator | Sunday 05 April 2026 05:44:19 +0000 (0:00:00.791) 0:30:56.711 **********
2026-04-05 05:44:22.495892 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495903 | orchestrator |
2026-04-05 05:44:22.495913 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:44:22.495924 | orchestrator | Sunday 05 April 2026 05:44:20 +0000 (0:00:00.832) 0:30:57.544 **********
2026-04-05 05:44:22.495935 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495945 | orchestrator |
2026-04-05 05:44:22.495956 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:44:22.495966 | orchestrator | Sunday 05 April 2026 05:44:21 +0000 (0:00:00.803) 0:30:58.347 **********
2026-04-05 05:44:22.495977 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:44:22.495988 | orchestrator |
2026-04-05 05:44:22.495998 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:44:22.496009 | orchestrator | Sunday 05 April 2026 05:44:22 +0000 (0:00:00.803) 0:30:59.150 **********
2026-04-05 05:44:22.496027 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.100493 | orchestrator |
2026-04-05 05:45:09.100619 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:45:09.100647 | orchestrator | Sunday 05 April 2026 05:44:23 +0000 (0:00:00.764) 0:30:59.915 **********
2026-04-05 05:45:09.100667 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.100686 | orchestrator |
2026-04-05 05:45:09.100704 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:45:09.100724 | orchestrator | Sunday 05 April 2026 05:44:24 +0000 (0:00:00.805) 0:31:00.720 **********
2026-04-05 05:45:09.100743 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.100762 | orchestrator |
2026-04-05 05:45:09.100781 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:45:09.100800 | orchestrator | Sunday 05 April 2026 05:44:24 +0000 (0:00:00.834) 0:31:01.555 **********
2026-04-05 05:45:09.100818 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.100836 | orchestrator |
2026-04-05 05:45:09.100856 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:45:09.100875 | orchestrator | Sunday 05 April 2026 05:44:25 +0000 (0:00:00.802) 0:31:02.357 **********
2026-04-05 05:45:09.100895 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.100912 | orchestrator |
2026-04-05 05:45:09.100931 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:45:09.100948 | orchestrator | Sunday 05 April 2026 05:44:26 +0000 (0:00:00.837) 0:31:03.195 **********
2026-04-05 05:45:09.100963 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.100977 | orchestrator |
2026-04-05 05:45:09.100990 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:45:09.101003 | orchestrator | Sunday 05 April 2026 05:44:28 +0000 (0:00:01.565) 0:31:04.760 **********
2026-04-05 05:45:09.101016 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.101029 | orchestrator |
2026-04-05 05:45:09.101041 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:45:09.101073 | orchestrator | Sunday 05 April 2026 05:44:30 +0000 (0:00:02.303) 0:31:07.064 **********
2026-04-05 05:45:09.101097 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-05 05:45:09.101136 | orchestrator |
2026-04-05 05:45:09.101150 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:45:09.101164 | orchestrator | Sunday 05 April 2026 05:44:31 +0000 (0:00:01.174) 0:31:08.238 **********
2026-04-05 05:45:09.101188 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101201 | orchestrator |
2026-04-05 05:45:09.101214 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:45:09.101226 | orchestrator | Sunday 05 April 2026 05:44:32 +0000 (0:00:01.119) 0:31:09.357 **********
2026-04-05 05:45:09.101239 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101252 | orchestrator |
2026-04-05 05:45:09.101264 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:45:09.101275 | orchestrator | Sunday 05 April 2026 05:44:33 +0000 (0:00:01.144) 0:31:10.501 **********
2026-04-05 05:45:09.101285 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:45:09.101296 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:45:09.101307 | orchestrator |
2026-04-05 05:45:09.101318 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:45:09.101329 | orchestrator | Sunday 05 April 2026 05:44:35 +0000 (0:00:01.963) 0:31:12.465 **********
2026-04-05 05:45:09.101362 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.101384 | orchestrator |
2026-04-05 05:45:09.101404 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:45:09.101419 | orchestrator | Sunday 05 April 2026 05:44:37 +0000 (0:00:01.511) 0:31:13.977 **********
2026-04-05 05:45:09.101430 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101441 | orchestrator |
2026-04-05 05:45:09.101452 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:45:09.101463 | orchestrator | Sunday 05 April 2026 05:44:38 +0000 (0:00:01.169) 0:31:15.147 **********
2026-04-05 05:45:09.101474 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101484 | orchestrator |
2026-04-05 05:45:09.101495 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:45:09.101506 | orchestrator | Sunday 05 April 2026 05:44:39 +0000 (0:00:00.787) 0:31:15.935 **********
2026-04-05 05:45:09.101517 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101527 | orchestrator |
2026-04-05 05:45:09.101538 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:45:09.101549 | orchestrator | Sunday 05 April 2026 05:44:40 +0000 (0:00:00.783) 0:31:16.718 **********
2026-04-05 05:45:09.101559 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-05 05:45:09.101570 | orchestrator |
2026-04-05 05:45:09.101581 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:45:09.101591 | orchestrator | Sunday 05 April 2026 05:44:41 +0000 (0:00:01.160) 0:31:17.879 **********
2026-04-05 05:45:09.101602 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.101613 | orchestrator |
2026-04-05 05:45:09.101624 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:45:09.101635 | orchestrator | Sunday 05 April 2026 05:44:43 +0000 (0:00:02.679) 0:31:20.559 **********
2026-04-05 05:45:09.101645 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:45:09.101656 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:45:09.101667 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:45:09.101677 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101688 | orchestrator |
2026-04-05 05:45:09.101699 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:45:09.101710 | orchestrator | Sunday 05 April 2026 05:44:45 +0000 (0:00:01.335) 0:31:21.894 **********
2026-04-05 05:45:09.101740 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101761 | orchestrator |
2026-04-05 05:45:09.101772 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:45:09.101783 | orchestrator | Sunday 05 April 2026 05:44:46 +0000 (0:00:01.132) 0:31:23.028 **********
2026-04-05 05:45:09.101794 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101804 | orchestrator |
2026-04-05 05:45:09.101815 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 05:45:09.101826 | orchestrator | Sunday 05 April 2026 05:44:47 +0000 (0:00:01.178) 0:31:24.206 **********
2026-04-05 05:45:09.101837 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101847 | orchestrator |
2026-04-05 05:45:09.101858 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:45:09.101869 | orchestrator | Sunday 05 April 2026 05:44:48 +0000 (0:00:01.138) 0:31:25.345 **********
2026-04-05 05:45:09.101879 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101890 | orchestrator |
2026-04-05 05:45:09.101901 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:45:09.101911 | orchestrator | Sunday 05 April 2026 05:44:49 +0000 (0:00:01.175) 0:31:26.520 **********
2026-04-05 05:45:09.101922 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.101933 | orchestrator |
2026-04-05 05:45:09.101943 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:45:09.101954 | orchestrator | Sunday 05 April 2026 05:44:50 +0000 (0:00:00.906) 0:31:27.427 **********
2026-04-05 05:45:09.101965 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.101976 | orchestrator |
2026-04-05 05:45:09.101986 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:45:09.101997 | orchestrator | Sunday 05 April 2026 05:44:52 +0000 (0:00:02.255) 0:31:29.683 **********
2026-04-05 05:45:09.102008 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.102076 | orchestrator |
2026-04-05 05:45:09.102088 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:45:09.102099 | orchestrator | Sunday 05 April 2026 05:44:53 +0000 (0:00:00.814) 0:31:30.497 **********
2026-04-05 05:45:09.102132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-05 05:45:09.102143 | orchestrator |
2026-04-05 05:45:09.102187 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:45:09.102198 | orchestrator | Sunday 05 April 2026 05:44:54 +0000 (0:00:01.147) 0:31:31.645 **********
2026-04-05 05:45:09.102216 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102227 | orchestrator |
2026-04-05 05:45:09.102238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:45:09.102249 | orchestrator | Sunday 05 April 2026 05:44:56 +0000 (0:00:01.215) 0:31:32.861 **********
2026-04-05 05:45:09.102260 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102270 | orchestrator |
2026-04-05 05:45:09.102281 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:45:09.102292 | orchestrator | Sunday 05 April 2026 05:44:57 +0000 (0:00:01.233) 0:31:34.094 **********
2026-04-05 05:45:09.102303 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102314 | orchestrator |
2026-04-05 05:45:09.102324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:45:09.102335 | orchestrator | Sunday 05 April 2026 05:44:58 +0000 (0:00:01.155) 0:31:35.250 **********
2026-04-05 05:45:09.102364 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102376 | orchestrator |
2026-04-05 05:45:09.102386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:45:09.102397 | orchestrator | Sunday 05 April 2026 05:44:59 +0000 (0:00:01.313) 0:31:36.563 **********
2026-04-05 05:45:09.102408 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102419 | orchestrator |
2026-04-05 05:45:09.102429 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 05:45:09.102440 | orchestrator | Sunday 05 April 2026 05:45:01 +0000 (0:00:01.203) 0:31:37.766 **********
2026-04-05 05:45:09.102460 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102471 | orchestrator |
2026-04-05 05:45:09.102481 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:45:09.102492 | orchestrator | Sunday 05 April 2026 05:45:02 +0000 (0:00:01.187) 0:31:38.954 **********
2026-04-05 05:45:09.102503 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102514 | orchestrator |
2026-04-05 05:45:09.102525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:45:09.102535 | orchestrator | Sunday 05 April 2026 05:45:03 +0000 (0:00:01.166) 0:31:40.120 **********
2026-04-05 05:45:09.102546 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:09.102557 | orchestrator |
2026-04-05 05:45:09.102568 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:45:09.102579 | orchestrator | Sunday 05 April 2026 05:45:04 +0000 (0:00:01.208) 0:31:41.329 **********
2026-04-05 05:45:09.102589 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:09.102600 | orchestrator |
2026-04-05 05:45:09.102611 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:45:09.102622 | orchestrator | Sunday 05 April 2026 05:45:05 +0000 (0:00:00.849) 0:31:42.179 **********
2026-04-05 05:45:09.102633 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-05 05:45:09.102643 | orchestrator |
2026-04-05 05:45:09.102654 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:45:09.102665 | orchestrator | Sunday 05 April 2026 05:45:06 +0000 (0:00:01.106) 0:31:43.285 **********
2026-04-05 05:45:09.102676 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-05 05:45:09.102687 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-05 05:45:09.102698 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-05 05:45:09.102709 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-05 05:45:09.102719 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-05 05:45:09.102730 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-05 05:45:09.102749 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-05 05:45:49.179089 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:45:49.179201 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:45:49.179217 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:45:49.179229 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:45:49.179241 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:45:49.179301 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:45:49.179313 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:45:49.179325 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-05 05:45:49.179336 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-05 05:45:49.179347 | orchestrator |
2026-04-05 05:45:49.179359 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:45:49.179371 | orchestrator | Sunday 05 April 2026 05:45:13 +0000 (0:00:06.449) 0:31:49.734 **********
2026-04-05 05:45:49.179382 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179393 | orchestrator |
2026-04-05 05:45:49.179405 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:45:49.179416 | orchestrator | Sunday 05 April 2026 05:45:13 +0000 (0:00:00.846) 0:31:50.581 **********
2026-04-05 05:45:49.179427 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179438 | orchestrator |
2026-04-05 05:45:49.179449 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:45:49.179460 | orchestrator | Sunday 05 April 2026 05:45:14 +0000 (0:00:00.792) 0:31:51.373 **********
2026-04-05 05:45:49.179471 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179482 | orchestrator |
2026-04-05 05:45:49.179517 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:45:49.179529 | orchestrator | Sunday 05 April 2026 05:45:15 +0000 (0:00:00.848) 0:31:52.222 **********
2026-04-05 05:45:49.179540 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179551 | orchestrator |
2026-04-05 05:45:49.179562 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:45:49.179573 | orchestrator | Sunday 05 April 2026 05:45:16 +0000 (0:00:00.985) 0:31:53.208 **********
2026-04-05 05:45:49.179584 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179594 | orchestrator |
2026-04-05 05:45:49.179618 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:45:49.179630 | orchestrator | Sunday 05 April 2026 05:45:17 +0000 (0:00:00.788) 0:31:53.997 **********
2026-04-05 05:45:49.179641 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179652 | orchestrator |
2026-04-05 05:45:49.179663 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:45:49.179674 | orchestrator | Sunday 05 April 2026 05:45:18 +0000 (0:00:00.815) 0:31:54.813 **********
2026-04-05 05:45:49.179685 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179696 | orchestrator |
2026-04-05 05:45:49.179707 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:45:49.179718 | orchestrator | Sunday 05 April 2026 05:45:18 +0000 (0:00:00.797) 0:31:55.611 **********
2026-04-05 05:45:49.179729 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179739 | orchestrator |
2026-04-05 05:45:49.179750 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:45:49.179761 | orchestrator | Sunday 05 April 2026 05:45:19 +0000 (0:00:00.788) 0:31:56.399 **********
2026-04-05 05:45:49.179772 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179783 | orchestrator |
2026-04-05 05:45:49.179793 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:45:49.179804 | orchestrator | Sunday 05 April 2026 05:45:20 +0000 (0:00:00.838) 0:31:57.237 **********
2026-04-05 05:45:49.179815 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179826 | orchestrator |
2026-04-05 05:45:49.179837 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:45:49.179847 | orchestrator | Sunday 05 April 2026 05:45:21 +0000 (0:00:00.819) 0:31:58.056 **********
2026-04-05 05:45:49.179858 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179869 | orchestrator |
2026-04-05 05:45:49.179880 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:45:49.179890 | orchestrator | Sunday 05 April 2026 05:45:22 +0000 (0:00:00.801) 0:31:58.858 **********
2026-04-05 05:45:49.179901 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179912 | orchestrator |
2026-04-05 05:45:49.179922 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:45:49.179933 | orchestrator | Sunday 05 April 2026 05:45:22 +0000 (0:00:00.785) 0:31:59.644 **********
2026-04-05 05:45:49.179944 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.179955 | orchestrator |
2026-04-05 05:45:49.179966 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:45:49.179976 | orchestrator | Sunday 05 April 2026 05:45:23 +0000 (0:00:00.890) 0:32:00.535
2026-04-05 05:45:49.179987 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.179999 | orchestrator | 2026-04-05 05:45:49.180010 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 05:45:49.180021 | orchestrator | Sunday 05 April 2026 05:45:24 +0000 (0:00:00.790) 0:32:01.325 ********** 2026-04-05 05:45:49.180031 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.180042 | orchestrator | 2026-04-05 05:45:49.180053 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:45:49.180064 | orchestrator | Sunday 05 April 2026 05:45:25 +0000 (0:00:00.943) 0:32:02.268 ********** 2026-04-05 05:45:49.180085 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.180096 | orchestrator | 2026-04-05 05:45:49.180107 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:45:49.180118 | orchestrator | Sunday 05 April 2026 05:45:26 +0000 (0:00:00.809) 0:32:03.077 ********** 2026-04-05 05:45:49.180146 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.180158 | orchestrator | 2026-04-05 05:45:49.180169 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:45:49.180182 | orchestrator | Sunday 05 April 2026 05:45:27 +0000 (0:00:00.807) 0:32:03.885 ********** 2026-04-05 05:45:49.180192 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.180203 | orchestrator | 2026-04-05 05:45:49.180214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 05:45:49.180225 | orchestrator | Sunday 05 April 2026 05:45:28 +0000 (0:00:00.967) 0:32:04.852 ********** 2026-04-05 05:45:49.180236 | orchestrator | skipping: [testbed-node-2] 2026-04-05 05:45:49.180246 | orchestrator | 2026-04-05 05:45:49.180280 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:45:49.180291 | orchestrator | Sunday 05 April 2026 05:45:28 +0000 (0:00:00.791) 0:32:05.644 **********
2026-04-05 05:45:49.180302 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180313 | orchestrator |
2026-04-05 05:45:49.180323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:45:49.180334 | orchestrator | Sunday 05 April 2026 05:45:29 +0000 (0:00:00.814) 0:32:06.459 **********
2026-04-05 05:45:49.180345 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180355 | orchestrator |
2026-04-05 05:45:49.180366 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:45:49.180377 | orchestrator | Sunday 05 April 2026 05:45:30 +0000 (0:00:00.812) 0:32:07.271 **********
2026-04-05 05:45:49.180388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:45:49.180398 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:45:49.180409 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:45:49.180420 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180430 | orchestrator |
2026-04-05 05:45:49.180441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:45:49.180452 | orchestrator | Sunday 05 April 2026 05:45:31 +0000 (0:00:01.096) 0:32:08.368 **********
2026-04-05 05:45:49.180463 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:45:49.180474 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:45:49.180490 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:45:49.180501 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180513 | orchestrator |
2026-04-05 05:45:49.180532 |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:45:49.180551 | orchestrator | Sunday 05 April 2026 05:45:32 +0000 (0:00:01.037) 0:32:09.406 **********
2026-04-05 05:45:49.180568 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-05 05:45:49.180585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-05 05:45:49.180603 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-05 05:45:49.180621 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180639 | orchestrator |
2026-04-05 05:45:49.180657 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:45:49.180675 | orchestrator | Sunday 05 April 2026 05:45:33 +0000 (0:00:01.108) 0:32:10.514 **********
2026-04-05 05:45:49.180692 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180710 | orchestrator |
2026-04-05 05:45:49.180728 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:45:49.180747 | orchestrator | Sunday 05 April 2026 05:45:34 +0000 (0:00:00.825) 0:32:11.340 **********
2026-04-05 05:45:49.180779 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-05 05:45:49.180792 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.180802 | orchestrator |
2026-04-05 05:45:49.180813 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 05:45:49.180824 | orchestrator | Sunday 05 April 2026 05:45:35 +0000 (0:00:00.950) 0:32:12.290 **********
2026-04-05 05:45:49.180835 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:49.180846 | orchestrator |
2026-04-05 05:45:49.180856 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-05 05:45:49.180867 | orchestrator | Sunday 05 April 2026 05:45:36 +0000 (0:00:01.396) 0:32:13.687
**********
2026-04-05 05:45:49.180878 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:45:49.180889 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:45:49.180900 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-05 05:45:49.180911 | orchestrator |
2026-04-05 05:45:49.180922 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-05 05:45:49.180932 | orchestrator | Sunday 05 April 2026 05:45:38 +0000 (0:00:01.951) 0:32:15.639 **********
2026-04-05 05:45:49.180943 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-04-05 05:45:49.180953 | orchestrator |
2026-04-05 05:45:49.180964 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-05 05:45:49.180974 | orchestrator | Sunday 05 April 2026 05:45:40 +0000 (0:00:01.330) 0:32:16.969 **********
2026-04-05 05:45:49.180992 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:45:49.181010 | orchestrator |
2026-04-05 05:45:49.181030 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-05 05:45:49.181048 | orchestrator | Sunday 05 April 2026 05:45:41 +0000 (0:00:01.560) 0:32:18.530 **********
2026-04-05 05:45:49.181066 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:45:49.181077 | orchestrator |
2026-04-05 05:45:49.181088 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-05 05:45:49.181099 | orchestrator | Sunday 05 April 2026 05:45:43 +0000 (0:00:01.252) 0:32:19.783 **********
2026-04-05 05:45:49.181109 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:45:49.181120 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:45:49.181141 |
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 05:46:36.961547 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-04-05 05:46:36.961663 | orchestrator |
2026-04-05 05:46:36.961745 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-05 05:46:36.961758 | orchestrator | Sunday 05 April 2026 05:45:50 +0000 (0:00:07.266) 0:32:27.049 **********
2026-04-05 05:46:36.961769 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.961782 | orchestrator |
2026-04-05 05:46:36.961793 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-05 05:46:36.961804 | orchestrator | Sunday 05 April 2026 05:45:51 +0000 (0:00:01.165) 0:32:28.215 **********
2026-04-05 05:46:36.961815 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 05:46:36.961826 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-05 05:46:36.961837 | orchestrator |
2026-04-05 05:46:36.961848 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-05 05:46:36.961858 | orchestrator | Sunday 05 April 2026 05:45:54 +0000 (0:00:03.176) 0:32:31.392 **********
2026-04-05 05:46:36.961869 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-05 05:46:36.961880 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-05 05:46:36.961891 | orchestrator |
2026-04-05 05:46:36.961901 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-05 05:46:36.961912 | orchestrator | Sunday 05 April 2026 05:45:56 +0000 (0:00:01.984) 0:32:33.376 **********
2026-04-05 05:46:36.961923 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.961960 | orchestrator |
2026-04-05 05:46:36.961972 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-05 05:46:36.961982 |
orchestrator | Sunday 05 April 2026 05:45:58 +0000 (0:00:01.472) 0:32:34.849 **********
2026-04-05 05:46:36.961993 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962003 | orchestrator |
2026-04-05 05:46:36.962014 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-05 05:46:36.962094 | orchestrator | Sunday 05 April 2026 05:45:58 +0000 (0:00:00.826) 0:32:35.675 **********
2026-04-05 05:46:36.962144 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962184 | orchestrator |
2026-04-05 05:46:36.962198 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-05 05:46:36.962210 | orchestrator | Sunday 05 April 2026 05:45:59 +0000 (0:00:00.798) 0:32:36.474 **********
2026-04-05 05:46:36.962236 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-04-05 05:46:36.962249 | orchestrator |
2026-04-05 05:46:36.962262 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-05 05:46:36.962274 | orchestrator | Sunday 05 April 2026 05:46:00 +0000 (0:00:01.221) 0:32:37.695 **********
2026-04-05 05:46:36.962287 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962299 | orchestrator |
2026-04-05 05:46:36.962311 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-05 05:46:36.962323 | orchestrator | Sunday 05 April 2026 05:46:02 +0000 (0:00:01.215) 0:32:38.911 **********
2026-04-05 05:46:36.962336 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962348 | orchestrator |
2026-04-05 05:46:36.962360 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-05 05:46:36.962373 | orchestrator | Sunday 05 April 2026 05:46:03 +0000 (0:00:01.155) 0:32:40.066 **********
2026-04-05 05:46:36.962384 | orchestrator | included:
/ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-04-05 05:46:36.962397 | orchestrator |
2026-04-05 05:46:36.962409 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-05 05:46:36.962421 | orchestrator | Sunday 05 April 2026 05:46:04 +0000 (0:00:01.122) 0:32:41.189 **********
2026-04-05 05:46:36.962434 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.962447 | orchestrator |
2026-04-05 05:46:36.962460 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-05 05:46:36.962471 | orchestrator | Sunday 05 April 2026 05:46:06 +0000 (0:00:02.083) 0:32:43.272 **********
2026-04-05 05:46:36.962482 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.962493 | orchestrator |
2026-04-05 05:46:36.962503 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-05 05:46:36.962514 | orchestrator | Sunday 05 April 2026 05:46:08 +0000 (0:00:02.026) 0:32:45.299 **********
2026-04-05 05:46:36.962525 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.962536 | orchestrator |
2026-04-05 05:46:36.962546 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-05 05:46:36.962557 | orchestrator | Sunday 05 April 2026 05:46:11 +0000 (0:00:02.509) 0:32:47.808 **********
2026-04-05 05:46:36.962567 | orchestrator | changed: [testbed-node-2]
2026-04-05 05:46:36.962578 | orchestrator |
2026-04-05 05:46:36.962589 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-05 05:46:36.962599 | orchestrator | Sunday 05 April 2026 05:46:14 +0000 (0:00:03.582) 0:32:51.391 **********
2026-04-05 05:46:36.962610 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-05 05:46:36.962621 | orchestrator |
2026-04-05 05:46:36.962631 | orchestrator | TASK [ceph-mgr : Wait for all mgr to
be up] ************************************
2026-04-05 05:46:36.962642 | orchestrator | Sunday 05 April 2026 05:46:16 +0000 (0:00:01.569) 0:32:52.960 **********
2026-04-05 05:46:36.962653 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:46:36.962663 | orchestrator |
2026-04-05 05:46:36.962674 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-05 05:46:36.962694 | orchestrator | Sunday 05 April 2026 05:46:18 +0000 (0:00:02.476) 0:32:55.436 **********
2026-04-05 05:46:36.962705 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:46:36.962716 | orchestrator |
2026-04-05 05:46:36.962727 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-05 05:46:36.962737 | orchestrator | Sunday 05 April 2026 05:46:21 +0000 (0:00:02.396) 0:32:57.832 **********
2026-04-05 05:46:36.962748 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.962758 | orchestrator |
2026-04-05 05:46:36.962769 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-05 05:46:36.962800 | orchestrator | Sunday 05 April 2026 05:46:22 +0000 (0:00:01.372) 0:32:59.206 **********
2026-04-05 05:46:36.962812 | orchestrator | ok: [testbed-node-2]
2026-04-05 05:46:36.962823 | orchestrator |
2026-04-05 05:46:36.962833 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-05 05:46:36.962844 | orchestrator | Sunday 05 April 2026 05:46:23 +0000 (0:00:01.283) 0:33:00.490 **********
2026-04-05 05:46:36.962855 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-05 05:46:36.962866 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-05 05:46:36.962876 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962887 | orchestrator |
2026-04-05 05:46:36.962898 | orchestrator | TASK
[ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-05 05:46:36.962908 | orchestrator | Sunday 05 April 2026 05:46:25 +0000 (0:00:01.438) 0:33:01.928 **********
2026-04-05 05:46:36.962919 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-05 05:46:36.962929 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-05 05:46:36.962940 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-05 05:46:36.962951 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-05 05:46:36.962961 | orchestrator | skipping: [testbed-node-2]
2026-04-05 05:46:36.962972 | orchestrator |
2026-04-05 05:46:36.962983 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-04-05 05:46:36.962993 | orchestrator |
2026-04-05 05:46:36.963004 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 05:46:36.963015 | orchestrator | Sunday 05 April 2026 05:46:27 +0000 (0:00:01.893) 0:33:03.822 **********
2026-04-05 05:46:36.963025 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:46:36.963036 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:46:36.963047 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:46:36.963058 | orchestrator |
2026-04-05 05:46:36.963068 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 05:46:36.963079 | orchestrator | Sunday 05 April 2026 05:46:28 +0000 (0:00:01.687) 0:33:05.521 **********
2026-04-05 05:46:36.963090 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:46:36.963100 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:46:36.963111 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:46:36.963121 | orchestrator |
2026-04-05 05:46:36.963132 | orchestrator | TASK [Get pool list] ***********************************************************
2026-04-05 05:46:36.963164 | orchestrator | Sunday 05 April 2026
05:46:30 +0000 (0:00:01.687) 0:33:07.209 ********** 2026-04-05 05:46:36.963176 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:46:36.963186 | orchestrator | 2026-04-05 05:46:36.963197 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-05 05:46:36.963208 | orchestrator | Sunday 05 April 2026 05:46:33 +0000 (0:00:03.173) 0:33:10.382 ********** 2026-04-05 05:46:36.963219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:46:36.963229 | orchestrator | 2026-04-05 05:46:36.963240 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-05 05:46:36.963250 | orchestrator | Sunday 05 April 2026 05:46:36 +0000 (0:00:02.984) 0:33:13.367 ********** 2026-04-05 05:46:36.963268 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-05T03:06:55.765946+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:36.963308 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-05T03:08:10.918797+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:37.402468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-05T03:08:15.105843+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:37.402630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-05T03:09:15.911931+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:37.402650 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-05T03:09:21.492761+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:37.402678 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-05T03:09:27.860528+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:37.402701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-05T03:09:34.236493+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '197', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:38.223409 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-05T03:09:40.487257+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': 
'0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:38.223539 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-05T03:09:53.079916+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:38.223586 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-05T03:10:42.189014+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 
32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '107', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 107, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:38.223617 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-04-05T03:10:51.303276+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '116', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 116, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:46:38.223645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-04-05T03:11:00.313564+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '207', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 207, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:48:13.973032 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-04-05T03:11:08.961542+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '133', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 133, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:48:13.973167 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-04-05T03:11:18.688402+0000', 
'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '143', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 143, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-05 05:48:13.973213 | orchestrator | 2026-04-05 05:48:13.973229 | orchestrator | TASK [Disable balancer] ******************************************************** 
2026-04-05 05:48:13.973258 | orchestrator | Sunday 05 April 2026 05:46:39 +0000 (0:00:02.905) 0:33:16.272 ********** 2026-04-05 05:48:13.973271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:48:13.973281 | orchestrator | 2026-04-05 05:48:13.973292 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-04-05 05:48:13.973303 | orchestrator | Sunday 05 April 2026 05:46:42 +0000 (0:00:02.956) 0:33:19.229 ********** 2026-04-05 05:48:13.973314 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-05 05:48:13.973326 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-05 05:48:13.973337 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-05 05:48:13.973348 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-05 05:48:13.973360 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-05 05:48:13.973371 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-05 05:48:13.973381 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-05 05:48:13.973392 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-05 05:48:13.973403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-05 05:48:13.973413 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 
'mode': 'off'})  2026-04-05 05:48:13.973424 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-04-05 05:48:13.973434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-05 05:48:13.973445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-05 05:48:13.973456 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-05 05:48:13.973466 | orchestrator | 2026-04-05 05:48:13.973477 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-04-05 05:48:13.973487 | orchestrator | Sunday 05 April 2026 05:47:57 +0000 (0:01:14.930) 0:34:34.159 ********** 2026-04-05 05:48:13.973498 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-05 05:48:13.973509 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-05 05:48:13.973519 | orchestrator | 2026-04-05 05:48:13.973531 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-05 05:48:13.973544 | orchestrator | 2026-04-05 05:48:13.973557 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 05:48:13.973569 | orchestrator | Sunday 05 April 2026 05:48:03 +0000 (0:00:06.503) 0:34:40.663 ********** 2026-04-05 05:48:13.973582 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-05 05:48:13.973595 | orchestrator | 2026-04-05 05:48:13.973608 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 05:48:13.973621 | orchestrator | Sunday 05 April 2026 05:48:05 +0000 (0:00:01.117) 0:34:41.781 ********** 2026-04-05 05:48:13.973633 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973646 | orchestrator | 2026-04-05 
05:48:13.973658 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 05:48:13.973671 | orchestrator | Sunday 05 April 2026 05:48:06 +0000 (0:00:01.513) 0:34:43.295 ********** 2026-04-05 05:48:13.973692 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973705 | orchestrator | 2026-04-05 05:48:13.973717 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 05:48:13.973729 | orchestrator | Sunday 05 April 2026 05:48:07 +0000 (0:00:01.123) 0:34:44.419 ********** 2026-04-05 05:48:13.973742 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973755 | orchestrator | 2026-04-05 05:48:13.973767 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 05:48:13.973779 | orchestrator | Sunday 05 April 2026 05:48:09 +0000 (0:00:01.460) 0:34:45.879 ********** 2026-04-05 05:48:13.973792 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973804 | orchestrator | 2026-04-05 05:48:13.973815 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 05:48:13.973826 | orchestrator | Sunday 05 April 2026 05:48:10 +0000 (0:00:01.199) 0:34:47.079 ********** 2026-04-05 05:48:13.973836 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973847 | orchestrator | 2026-04-05 05:48:13.973857 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 05:48:13.973868 | orchestrator | Sunday 05 April 2026 05:48:11 +0000 (0:00:01.129) 0:34:48.208 ********** 2026-04-05 05:48:13.973879 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.973889 | orchestrator | 2026-04-05 05:48:13.973900 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 05:48:13.973916 | orchestrator | Sunday 05 April 2026 05:48:12 +0000 (0:00:01.174) 0:34:49.383 ********** 
2026-04-05 05:48:13.973927 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:13.973938 | orchestrator | 2026-04-05 05:48:13.973949 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 05:48:13.973959 | orchestrator | Sunday 05 April 2026 05:48:13 +0000 (0:00:01.140) 0:34:50.524 ********** 2026-04-05 05:48:13.973989 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:13.974000 | orchestrator | 2026-04-05 05:48:13.974072 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 05:48:39.183919 | orchestrator | Sunday 05 April 2026 05:48:14 +0000 (0:00:01.144) 0:34:51.668 ********** 2026-04-05 05:48:39.184050 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:48:39.184060 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:48:39.184067 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:48:39.184073 | orchestrator | 2026-04-05 05:48:39.184080 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 05:48:39.184086 | orchestrator | Sunday 05 April 2026 05:48:17 +0000 (0:00:02.074) 0:34:53.743 ********** 2026-04-05 05:48:39.184092 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:39.184098 | orchestrator | 2026-04-05 05:48:39.184105 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 05:48:39.184110 | orchestrator | Sunday 05 April 2026 05:48:18 +0000 (0:00:01.270) 0:34:55.014 ********** 2026-04-05 05:48:39.184116 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:48:39.184122 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:48:39.184128 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:48:39.184133 | orchestrator | 2026-04-05 05:48:39.184139 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 05:48:39.184145 | orchestrator | Sunday 05 April 2026 05:48:21 +0000 (0:00:03.282) 0:34:58.296 ********** 2026-04-05 05:48:39.184151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 05:48:39.184157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 05:48:39.184163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 05:48:39.184169 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184175 | orchestrator | 2026-04-05 05:48:39.184201 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 05:48:39.184207 | orchestrator | Sunday 05 April 2026 05:48:22 +0000 (0:00:01.403) 0:34:59.700 ********** 2026-04-05 05:48:39.184215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184235 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184241 | orchestrator | 2026-04-05 
05:48:39.184247 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:48:39.184253 | orchestrator | Sunday 05 April 2026 05:48:25 +0000 (0:00:02.158) 0:35:01.859 ********** 2026-04-05 05:48:39.184261 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184269 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:39.184292 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184298 | orchestrator | 2026-04-05 05:48:39.184304 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:48:39.184310 | orchestrator | Sunday 05 April 2026 05:48:26 +0000 (0:00:01.160) 0:35:03.019 ********** 2026-04-05 05:48:39.184330 | orchestrator | 
ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:48:18.823840', 'end': '2026-04-05 05:48:18.871332', 'delta': '0:00:00.047492', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:48:39.184338 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:48:19.761116', 'end': '2026-04-05 05:48:19.822691', 'delta': '0:00:00.061575', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:48:39.184350 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:48:20.351129', 'end': '2026-04-05 05:48:20.400128', 'delta': '0:00:00.048999', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:48:39.184356 | orchestrator | 2026-04-05 05:48:39.184362 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:48:39.184368 | orchestrator | Sunday 05 April 2026 05:48:27 +0000 (0:00:01.310) 0:35:04.329 ********** 2026-04-05 05:48:39.184374 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:39.184380 | orchestrator | 2026-04-05 05:48:39.184386 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 05:48:39.184391 | orchestrator | Sunday 05 April 2026 05:48:28 +0000 (0:00:01.230) 0:35:05.560 ********** 2026-04-05 05:48:39.184397 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184403 | orchestrator | 2026-04-05 05:48:39.184409 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:48:39.184414 | orchestrator | Sunday 05 April 2026 05:48:30 +0000 (0:00:01.265) 0:35:06.825 ********** 2026-04-05 05:48:39.184421 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:39.184426 | orchestrator | 2026-04-05 05:48:39.184432 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 05:48:39.184438 | orchestrator | Sunday 05 April 2026 05:48:31 +0000 (0:00:01.178) 0:35:08.004 ********** 2026-04-05 05:48:39.184444 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:48:39.184449 | orchestrator | 2026-04-05 05:48:39.184455 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:48:39.184461 | orchestrator | 
Sunday 05 April 2026 05:48:33 +0000 (0:00:01.961) 0:35:09.966 ********** 2026-04-05 05:48:39.184467 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:39.184472 | orchestrator | 2026-04-05 05:48:39.184478 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 05:48:39.184484 | orchestrator | Sunday 05 April 2026 05:48:34 +0000 (0:00:01.122) 0:35:11.089 ********** 2026-04-05 05:48:39.184491 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184498 | orchestrator | 2026-04-05 05:48:39.184505 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 05:48:39.184512 | orchestrator | Sunday 05 April 2026 05:48:35 +0000 (0:00:01.209) 0:35:12.298 ********** 2026-04-05 05:48:39.184518 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184525 | orchestrator | 2026-04-05 05:48:39.184532 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:48:39.184538 | orchestrator | Sunday 05 April 2026 05:48:36 +0000 (0:00:01.223) 0:35:13.521 ********** 2026-04-05 05:48:39.184549 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184556 | orchestrator | 2026-04-05 05:48:39.184563 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 05:48:39.184569 | orchestrator | Sunday 05 April 2026 05:48:37 +0000 (0:00:01.146) 0:35:14.668 ********** 2026-04-05 05:48:39.184580 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:39.184587 | orchestrator | 2026-04-05 05:48:39.184594 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 05:48:39.184601 | orchestrator | Sunday 05 April 2026 05:48:39 +0000 (0:00:01.116) 0:35:15.784 ********** 2026-04-05 05:48:39.184611 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:45.190915 | orchestrator | 2026-04-05 05:48:45.191082 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 05:48:45.191101 | orchestrator | Sunday 05 April 2026 05:48:40 +0000 (0:00:01.174) 0:35:16.959 ********** 2026-04-05 05:48:45.191114 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:45.191126 | orchestrator | 2026-04-05 05:48:45.191138 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 05:48:45.191149 | orchestrator | Sunday 05 April 2026 05:48:41 +0000 (0:00:01.117) 0:35:18.076 ********** 2026-04-05 05:48:45.191160 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:45.191173 | orchestrator | 2026-04-05 05:48:45.191183 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 05:48:45.191194 | orchestrator | Sunday 05 April 2026 05:48:42 +0000 (0:00:01.162) 0:35:19.238 ********** 2026-04-05 05:48:45.191205 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:45.191216 | orchestrator | 2026-04-05 05:48:45.191227 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 05:48:45.191239 | orchestrator | Sunday 05 April 2026 05:48:43 +0000 (0:00:01.251) 0:35:20.490 ********** 2026-04-05 05:48:45.191250 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:48:45.191275 | orchestrator | 2026-04-05 05:48:45.191287 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 05:48:45.191309 | orchestrator | Sunday 05 April 2026 05:48:44 +0000 (0:00:01.171) 0:35:21.661 ********** 2026-04-05 05:48:45.191323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}})  2026-04-05 05:48:45.191356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:48:45.191369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}})  2026-04-05 05:48:45.191425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:48:45.191486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:45.191528 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}})  2026-04-05 05:48:45.191556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}})  2026-04-05 05:48:45.191577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:46.555469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:48:46.555579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:46.555598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:48:46.555650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:48:46.555665 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:48:46.555678 | orchestrator | 2026-04-05 05:48:46.555690 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:48:46.555702 | orchestrator | Sunday 05 April 2026 05:48:46 +0000 (0:00:01.414) 0:35:23.076 ********** 2026-04-05 05:48:46.555733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.555748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.555761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.555775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.555800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.555819 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691276 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:48:46.691453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:49:26.242091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:49:26.242191 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:49:26.242206 | orchestrator | 2026-04-05 05:49:26.242215 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:49:26.242225 | orchestrator | Sunday 05 April 2026 05:48:47 +0000 (0:00:01.500) 0:35:24.576 ********** 2026-04-05 05:49:26.242233 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:49:26.242242 | orchestrator | 2026-04-05 05:49:26.242251 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 05:49:26.242281 | orchestrator | Sunday 05 April 2026 05:48:49 +0000 (0:00:01.554) 0:35:26.131 ********** 2026-04-05 05:49:26.242290 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:49:26.242298 | orchestrator | 2026-04-05 05:49:26.242306 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:49:26.242314 | orchestrator | Sunday 05 April 2026 05:48:50 +0000 (0:00:01.182) 0:35:27.313 ********** 2026-04-05 05:49:26.242322 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:49:26.242330 | orchestrator | 2026-04-05 05:49:26.242338 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:49:26.242346 | orchestrator | Sunday 05 April 2026 05:48:52 +0000 (0:00:01.488) 0:35:28.802 ********** 2026-04-05 05:49:26.242354 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:49:26.242362 | orchestrator | 2026-04-05 05:49:26.242370 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:49:26.242378 | orchestrator | Sunday 05 April 2026 05:48:53 +0000 (0:00:01.145) 0:35:29.947 ********** 2026-04-05 05:49:26.242386 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
05:49:26.242393 | orchestrator | 2026-04-05 05:49:26.242401 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:49:26.242410 | orchestrator | Sunday 05 April 2026 05:48:54 +0000 (0:00:01.246) 0:35:31.194 ********** 2026-04-05 05:49:26.242417 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:49:26.242425 | orchestrator | 2026-04-05 05:49:26.242433 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 05:49:26.242441 | orchestrator | Sunday 05 April 2026 05:48:55 +0000 (0:00:01.209) 0:35:32.403 ********** 2026-04-05 05:49:26.242449 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-05 05:49:26.242457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-05 05:49:26.242465 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-05 05:49:26.242473 | orchestrator | 2026-04-05 05:49:26.242481 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 05:49:26.242489 | orchestrator | Sunday 05 April 2026 05:48:57 +0000 (0:00:02.146) 0:35:34.550 ********** 2026-04-05 05:49:26.242497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 05:49:26.242505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 05:49:26.242513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 05:49:26.242521 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:49:26.242528 | orchestrator | 2026-04-05 05:49:26.242536 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 05:49:26.242556 | orchestrator | Sunday 05 April 2026 05:48:59 +0000 (0:00:01.195) 0:35:35.746 ********** 2026-04-05 05:49:26.242565 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-05 05:49:26.242575 | 
orchestrator |
2026-04-05 05:49:26.242585 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:49:26.242597 | orchestrator | Sunday 05 April 2026 05:49:00 +0000 (0:00:01.319) 0:35:37.065 **********
2026-04-05 05:49:26.242607 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242616 | orchestrator |
2026-04-05 05:49:26.242625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:49:26.242635 | orchestrator | Sunday 05 April 2026 05:49:01 +0000 (0:00:01.174) 0:35:38.240 **********
2026-04-05 05:49:26.242644 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242653 | orchestrator |
2026-04-05 05:49:26.242662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:49:26.242672 | orchestrator | Sunday 05 April 2026 05:49:02 +0000 (0:00:01.166) 0:35:39.408 **********
2026-04-05 05:49:26.242681 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242690 | orchestrator |
2026-04-05 05:49:26.242699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:49:26.242714 | orchestrator | Sunday 05 April 2026 05:49:03 +0000 (0:00:01.148) 0:35:40.557 **********
2026-04-05 05:49:26.242723 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:49:26.242732 | orchestrator |
2026-04-05 05:49:26.242741 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:49:26.242751 | orchestrator | Sunday 05 April 2026 05:49:05 +0000 (0:00:01.258) 0:35:41.816 **********
2026-04-05 05:49:26.242760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:49:26.242785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:49:26.242794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:49:26.242804 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242813 | orchestrator |
2026-04-05 05:49:26.242822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:49:26.242831 | orchestrator | Sunday 05 April 2026 05:49:06 +0000 (0:00:01.482) 0:35:43.298 **********
2026-04-05 05:49:26.242840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:49:26.242850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:49:26.242884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:49:26.242898 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242912 | orchestrator |
2026-04-05 05:49:26.242925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:49:26.242938 | orchestrator | Sunday 05 April 2026 05:49:08 +0000 (0:00:01.451) 0:35:44.750 **********
2026-04-05 05:49:26.242946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:49:26.242954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 05:49:26.242962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 05:49:26.242969 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:49:26.242977 | orchestrator |
2026-04-05 05:49:26.242985 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:49:26.242993 | orchestrator | Sunday 05 April 2026 05:49:09 +0000 (0:00:01.374) 0:35:46.124 **********
2026-04-05 05:49:26.243001 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:49:26.243009 | orchestrator |
2026-04-05 05:49:26.243017 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:49:26.243025 | orchestrator | Sunday 05 April 2026 05:49:10 +0000 (0:00:01.152) 0:35:47.276 **********
2026-04-05 05:49:26.243033 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 05:49:26.243040 | orchestrator |
2026-04-05 05:49:26.243048 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:49:26.243056 | orchestrator | Sunday 05 April 2026 05:49:11 +0000 (0:00:01.340) 0:35:48.619 **********
2026-04-05 05:49:26.243064 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:49:26.243072 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:49:26.243079 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:49:26.243087 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:49:26.243095 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:49:26.243103 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:49:26.243110 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:49:26.243118 | orchestrator |
2026-04-05 05:49:26.243126 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:49:26.243134 | orchestrator | Sunday 05 April 2026 05:49:14 +0000 (0:00:02.289) 0:35:50.908 **********
2026-04-05 05:49:26.243142 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:49:26.243149 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:49:26.243165 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:49:26.243173 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 05:49:26.243180 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 05:49:26.243188 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:49:26.243201 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:49:26.243209 | orchestrator |
2026-04-05 05:49:26.243216 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-05 05:49:26.243224 | orchestrator | Sunday 05 April 2026 05:49:16 +0000 (0:00:02.727) 0:35:53.636 **********
2026-04-05 05:49:26.243232 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:49:26.243240 | orchestrator |
2026-04-05 05:49:26.243248 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-05 05:49:26.243255 | orchestrator | Sunday 05 April 2026 05:49:18 +0000 (0:00:01.543) 0:35:55.180 **********
2026-04-05 05:49:26.243263 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:49:26.243271 | orchestrator |
2026-04-05 05:49:26.243279 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-05 05:49:26.243287 | orchestrator | Sunday 05 April 2026 05:49:19 +0000 (0:00:01.145) 0:35:56.325 **********
2026-04-05 05:49:26.243294 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:49:26.243302 | orchestrator |
2026-04-05 05:49:26.243310 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-05 05:49:26.243318 | orchestrator | Sunday 05 April 2026 05:49:20 +0000 (0:00:01.297) 0:35:57.623 **********
2026-04-05 05:49:26.243325 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-05 05:49:26.243333 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-04-05 05:49:26.243341 | orchestrator |
2026-04-05 05:49:26.243349 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:49:26.243357 | orchestrator | Sunday 05 April 2026 05:49:25 +0000 (0:00:04.190) 0:36:01.814 **********
2026-04-05 05:49:26.243364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-05 05:49:26.243372 | orchestrator |
2026-04-05 05:49:26.243380 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 05:49:26.243394 | orchestrator | Sunday 05 April 2026 05:49:26 +0000 (0:00:01.133) 0:36:02.947 **********
2026-04-05 05:50:17.545467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-05 05:50:17.545583 | orchestrator |
2026-04-05 05:50:17.545600 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 05:50:17.545612 | orchestrator | Sunday 05 April 2026 05:49:27 +0000 (0:00:01.138) 0:36:04.085 **********
2026-04-05 05:50:17.545623 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.545635 | orchestrator |
2026-04-05 05:50:17.545646 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 05:50:17.545657 | orchestrator | Sunday 05 April 2026 05:49:28 +0000 (0:00:01.114) 0:36:05.200 **********
2026-04-05 05:50:17.545668 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.545679 | orchestrator |
2026-04-05 05:50:17.545690 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 05:50:17.545701 | orchestrator | Sunday 05 April 2026 05:49:30 +0000 (0:00:01.538) 0:36:06.739 **********
2026-04-05 05:50:17.545712 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.545723 | orchestrator |
2026-04-05 05:50:17.545734 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 05:50:17.545744 | orchestrator | Sunday 05 April 2026 05:49:31 +0000 (0:00:01.547) 0:36:08.286 **********
2026-04-05 05:50:17.545755 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.545766 | orchestrator |
2026-04-05 05:50:17.545777 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 05:50:17.545879 | orchestrator | Sunday 05 April 2026 05:49:33 +0000 (0:00:01.533) 0:36:09.820 **********
2026-04-05 05:50:17.545892 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.545903 | orchestrator |
2026-04-05 05:50:17.545913 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 05:50:17.545924 | orchestrator | Sunday 05 April 2026 05:49:34 +0000 (0:00:01.122) 0:36:10.943 **********
2026-04-05 05:50:17.545935 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.545945 | orchestrator |
2026-04-05 05:50:17.545956 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 05:50:17.545967 | orchestrator | Sunday 05 April 2026 05:49:35 +0000 (0:00:01.248) 0:36:12.191 **********
2026-04-05 05:50:17.545977 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.545988 | orchestrator |
2026-04-05 05:50:17.545998 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 05:50:17.546009 | orchestrator | Sunday 05 April 2026 05:49:36 +0000 (0:00:01.116) 0:36:13.308 **********
2026-04-05 05:50:17.546077 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546089 | orchestrator |
2026-04-05 05:50:17.546099 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 05:50:17.546111 | orchestrator | Sunday 05 April 2026 05:49:38 +0000 (0:00:01.588) 0:36:14.896 **********
2026-04-05 05:50:17.546122 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546132 | orchestrator |
2026-04-05 05:50:17.546143 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 05:50:17.546153 | orchestrator | Sunday 05 April 2026 05:49:39 +0000 (0:00:01.611) 0:36:16.508 **********
2026-04-05 05:50:17.546164 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546174 | orchestrator |
2026-04-05 05:50:17.546185 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:50:17.546195 | orchestrator | Sunday 05 April 2026 05:49:40 +0000 (0:00:01.150) 0:36:17.658 **********
2026-04-05 05:50:17.546206 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546216 | orchestrator |
2026-04-05 05:50:17.546227 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:50:17.546237 | orchestrator | Sunday 05 April 2026 05:49:42 +0000 (0:00:01.139) 0:36:18.798 **********
2026-04-05 05:50:17.546248 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546258 | orchestrator |
2026-04-05 05:50:17.546269 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:50:17.546279 | orchestrator | Sunday 05 April 2026 05:49:43 +0000 (0:00:01.141) 0:36:19.940 **********
2026-04-05 05:50:17.546290 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546300 | orchestrator |
2026-04-05 05:50:17.546311 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:50:17.546336 | orchestrator | Sunday 05 April 2026 05:49:44 +0000 (0:00:01.118) 0:36:21.058 **********
2026-04-05 05:50:17.546347 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546358 | orchestrator |
2026-04-05 05:50:17.546368 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:50:17.546379 | orchestrator | Sunday 05 April 2026 05:49:45 +0000 (0:00:01.133) 0:36:22.191 **********
2026-04-05 05:50:17.546389 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546400 | orchestrator |
2026-04-05 05:50:17.546410 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:50:17.546421 | orchestrator | Sunday 05 April 2026 05:49:46 +0000 (0:00:01.202) 0:36:23.393 **********
2026-04-05 05:50:17.546431 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546442 | orchestrator |
2026-04-05 05:50:17.546452 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:50:17.546463 | orchestrator | Sunday 05 April 2026 05:49:47 +0000 (0:00:01.152) 0:36:24.546 **********
2026-04-05 05:50:17.546473 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546484 | orchestrator |
2026-04-05 05:50:17.546494 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:50:17.546514 | orchestrator | Sunday 05 April 2026 05:49:48 +0000 (0:00:01.108) 0:36:25.655 **********
2026-04-05 05:50:17.546524 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546535 | orchestrator |
2026-04-05 05:50:17.546545 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:50:17.546556 | orchestrator | Sunday 05 April 2026 05:49:50 +0000 (0:00:01.203) 0:36:26.858 **********
2026-04-05 05:50:17.546567 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.546577 | orchestrator |
2026-04-05 05:50:17.546587 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:50:17.546598 | orchestrator | Sunday 05 April 2026 05:49:51 +0000 (0:00:01.372) 0:36:28.230 **********
2026-04-05 05:50:17.546608 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546619 | orchestrator |
2026-04-05 05:50:17.546646 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:50:17.546657 | orchestrator | Sunday 05 April 2026 05:49:52 +0000 (0:00:01.136) 0:36:29.366 **********
2026-04-05 05:50:17.546668 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546678 | orchestrator |
2026-04-05 05:50:17.546689 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:50:17.546706 | orchestrator | Sunday 05 April 2026 05:49:53 +0000 (0:00:01.137) 0:36:30.504 **********
2026-04-05 05:50:17.546724 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546742 | orchestrator |
2026-04-05 05:50:17.546760 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:50:17.546810 | orchestrator | Sunday 05 April 2026 05:49:54 +0000 (0:00:01.135) 0:36:31.639 **********
2026-04-05 05:50:17.546835 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546852 | orchestrator |
2026-04-05 05:50:17.546870 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:50:17.546889 | orchestrator | Sunday 05 April 2026 05:49:56 +0000 (0:00:01.171) 0:36:32.811 **********
2026-04-05 05:50:17.546907 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546926 | orchestrator |
2026-04-05 05:50:17.546945 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:50:17.546963 | orchestrator | Sunday 05 April 2026 05:49:57 +0000 (0:00:01.163) 0:36:33.974 **********
2026-04-05 05:50:17.546978 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.546988 | orchestrator |
2026-04-05 05:50:17.546999 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:50:17.547010 | orchestrator | Sunday 05 April 2026 05:49:58 +0000 (0:00:01.156) 0:36:35.131 **********
2026-04-05 05:50:17.547020 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547031 | orchestrator |
2026-04-05 05:50:17.547041 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:50:17.547053 | orchestrator | Sunday 05 April 2026 05:49:59 +0000 (0:00:01.092) 0:36:36.224 **********
2026-04-05 05:50:17.547064 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547074 | orchestrator |
2026-04-05 05:50:17.547085 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:50:17.547096 | orchestrator | Sunday 05 April 2026 05:50:00 +0000 (0:00:01.168) 0:36:37.392 **********
2026-04-05 05:50:17.547106 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547117 | orchestrator |
2026-04-05 05:50:17.547127 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:50:17.547138 | orchestrator | Sunday 05 April 2026 05:50:01 +0000 (0:00:01.172) 0:36:38.565 **********
2026-04-05 05:50:17.547148 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547158 | orchestrator |
2026-04-05 05:50:17.547169 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:50:17.547185 | orchestrator | Sunday 05 April 2026 05:50:03 +0000 (0:00:01.175) 0:36:39.741 **********
2026-04-05 05:50:17.547211 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547231 | orchestrator |
2026-04-05 05:50:17.547249 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:50:17.547280 | orchestrator | Sunday 05 April 2026 05:50:04 +0000 (0:00:01.142) 0:36:40.883 **********
2026-04-05 05:50:17.547296 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547313 | orchestrator |
2026-04-05 05:50:17.547330 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:50:17.547348 | orchestrator | Sunday 05 April 2026 05:50:05 +0000 (0:00:01.287) 0:36:42.171 **********
2026-04-05 05:50:17.547366 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.547385 | orchestrator |
2026-04-05 05:50:17.547400 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:50:17.547410 | orchestrator | Sunday 05 April 2026 05:50:07 +0000 (0:00:01.977) 0:36:44.148 **********
2026-04-05 05:50:17.547421 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.547431 | orchestrator |
2026-04-05 05:50:17.547442 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:50:17.547452 | orchestrator | Sunday 05 April 2026 05:50:09 +0000 (0:00:02.163) 0:36:46.312 **********
2026-04-05 05:50:17.547474 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-04-05 05:50:17.547492 | orchestrator |
2026-04-05 05:50:17.547509 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:50:17.547526 | orchestrator | Sunday 05 April 2026 05:50:10 +0000 (0:00:01.208) 0:36:47.520 **********
2026-04-05 05:50:17.547546 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547563 | orchestrator |
2026-04-05 05:50:17.547582 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:50:17.547599 | orchestrator | Sunday 05 April 2026 05:50:11 +0000 (0:00:01.164) 0:36:48.685 **********
2026-04-05 05:50:17.547617 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547635 | orchestrator |
2026-04-05 05:50:17.547652 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:50:17.547670 | orchestrator | Sunday 05 April 2026 05:50:13 +0000 (0:00:01.155) 0:36:49.841 **********
2026-04-05 05:50:17.547688 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:50:17.547707 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:50:17.547726 | orchestrator |
2026-04-05 05:50:17.547744 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:50:17.547762 | orchestrator | Sunday 05 April 2026 05:50:14 +0000 (0:00:01.789) 0:36:51.631 **********
2026-04-05 05:50:17.547780 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:50:17.547827 | orchestrator |
2026-04-05 05:50:17.547844 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:50:17.547862 | orchestrator | Sunday 05 April 2026 05:50:16 +0000 (0:00:01.449) 0:36:53.080 **********
2026-04-05 05:50:17.547880 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:50:17.547897 | orchestrator |
2026-04-05 05:50:17.547916 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:50:17.547953 | orchestrator | Sunday 05 April 2026 05:50:17 +0000 (0:00:01.170) 0:36:54.251 **********
2026-04-05 05:51:04.018233 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018352 | orchestrator |
2026-04-05 05:51:04.018370 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:51:04.018383 | orchestrator | Sunday 05 April 2026 05:50:18 +0000 (0:00:01.209) 0:36:55.461 **********
2026-04-05 05:51:04.018394 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018405 | orchestrator |
2026-04-05 05:51:04.018417 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:51:04.018428 | orchestrator | Sunday 05 April 2026 05:50:19 +0000 (0:00:01.161) 0:36:56.622 **********
2026-04-05 05:51:04.018439 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-04-05 05:51:04.018451 | orchestrator |
2026-04-05 05:51:04.018462 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:51:04.018497 | orchestrator | Sunday 05 April 2026 05:50:21 +0000 (0:00:01.380) 0:36:58.003 **********
2026-04-05 05:51:04.018509 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:51:04.018521 | orchestrator |
2026-04-05 05:51:04.018532 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:51:04.018544 | orchestrator | Sunday 05 April 2026 05:50:22 +0000 (0:00:01.711) 0:36:59.714 **********
2026-04-05 05:51:04.018555 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:51:04.018566 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:51:04.018577 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:51:04.018587 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018598 | orchestrator |
2026-04-05 05:51:04.018609 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:51:04.018620 | orchestrator | Sunday 05 April 2026 05:50:24 +0000 (0:00:01.146) 0:37:00.861 **********
2026-04-05 05:51:04.018630 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018641 | orchestrator |
2026-04-05 05:51:04.018652 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:51:04.018663 | orchestrator | Sunday 05 April 2026 05:50:25 +0000 (0:00:01.136) 0:37:01.998 **********
2026-04-05 05:51:04.018674 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018684 | orchestrator |
2026-04-05 05:51:04.018695 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 05:51:04.018706 | orchestrator | Sunday 05 April 2026 05:50:26 +0000 (0:00:01.176) 0:37:03.174 **********
2026-04-05 05:51:04.018717 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018782 | orchestrator |
2026-04-05 05:51:04.018797 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:51:04.018810 | orchestrator | Sunday 05 April 2026 05:50:27 +0000 (0:00:01.146) 0:37:04.320 **********
2026-04-05 05:51:04.018823 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018837 | orchestrator |
2026-04-05 05:51:04.018850 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:51:04.018863 | orchestrator | Sunday 05 April 2026 05:50:28 +0000 (0:00:01.151) 0:37:05.472 **********
2026-04-05 05:51:04.018876 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.018889 | orchestrator |
2026-04-05 05:51:04.018903 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:51:04.018916 | orchestrator | Sunday 05 April 2026 05:50:29 +0000 (0:00:01.146) 0:37:06.618 **********
2026-04-05 05:51:04.018929 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:51:04.018942 | orchestrator |
2026-04-05 05:51:04.018956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:51:04.018969 | orchestrator | Sunday 05 April 2026 05:50:32 +0000 (0:00:02.452) 0:37:09.071 **********
2026-04-05 05:51:04.018982 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:51:04.018995 | orchestrator |
2026-04-05 05:51:04.019008 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:51:04.019021 | orchestrator | Sunday 05 April 2026 05:50:33 +0000 (0:00:01.140) 0:37:10.212 **********
2026-04-05 05:51:04.019048 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-05 05:51:04.019061 | orchestrator |
2026-04-05 05:51:04.019074 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:51:04.019087 | orchestrator | Sunday 05 April 2026 05:50:34 +0000 (0:00:01.130) 0:37:11.343 **********
2026-04-05 05:51:04.019100 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019114 | orchestrator |
2026-04-05 05:51:04.019127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:51:04.019140 | orchestrator | Sunday 05 April 2026 05:50:35 +0000 (0:00:01.144) 0:37:12.487 **********
2026-04-05 05:51:04.019153 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019174 | orchestrator |
2026-04-05 05:51:04.019185 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:51:04.019196 | orchestrator | Sunday 05 April 2026 05:50:37 +0000 (0:00:01.413) 0:37:13.901 **********
2026-04-05 05:51:04.019206 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019217 | orchestrator |
2026-04-05 05:51:04.019228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:51:04.019239 | orchestrator | Sunday 05 April 2026 05:50:38 +0000 (0:00:01.146) 0:37:15.047 **********
2026-04-05 05:51:04.019249 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019260 | orchestrator |
2026-04-05 05:51:04.019271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:51:04.019282 | orchestrator | Sunday 05 April 2026 05:50:39 +0000 (0:00:01.176) 0:37:16.224 **********
2026-04-05 05:51:04.019292 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019303 | orchestrator |
2026-04-05 05:51:04.019314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 05:51:04.019325 | orchestrator | Sunday 05 April 2026 05:50:40 +0000 (0:00:01.166) 0:37:17.390 **********
2026-04-05 05:51:04.019336 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019346 | orchestrator |
2026-04-05 05:51:04.019374 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:51:04.019385 | orchestrator | Sunday 05 April 2026 05:50:41 +0000 (0:00:01.163) 0:37:18.554 **********
2026-04-05 05:51:04.019396 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019407 | orchestrator |
2026-04-05 05:51:04.019418 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:51:04.019428 | orchestrator | Sunday 05 April 2026 05:50:42 +0000 (0:00:01.134) 0:37:19.689 **********
2026-04-05 05:51:04.019439 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019450 | orchestrator |
2026-04-05 05:51:04.019461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:51:04.019471 | orchestrator | Sunday 05 April 2026 05:50:44 +0000 (0:00:01.147) 0:37:20.836 **********
2026-04-05 05:51:04.019482 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:51:04.019493 | orchestrator |
2026-04-05 05:51:04.019503 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:51:04.019514 | orchestrator | Sunday 05 April 2026 05:50:45 +0000 (0:00:01.151) 0:37:21.987 **********
2026-04-05 05:51:04.019525 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-05 05:51:04.019536 | orchestrator |
2026-04-05 05:51:04.019546 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:51:04.019557 | orchestrator | Sunday 05 April 2026 05:50:46 +0000 (0:00:01.083) 0:37:23.071 **********
2026-04-05 05:51:04.019568 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 05:51:04.019579 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 05:51:04.019590 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 05:51:04.019601 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 05:51:04.019611 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 05:51:04.019622 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 05:51:04.019633 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 05:51:04.019643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:51:04.019654 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:51:04.019665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:51:04.019675 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:51:04.019686 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:51:04.019697 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:51:04.019708 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:51:04.019725 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 05:51:04.019757 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 05:51:04.019767 | orchestrator |
2026-04-05 05:51:04.019778 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:51:04.019789 | orchestrator | Sunday 05 April 2026 05:50:52 +0000 (0:00:06.588) 0:37:29.659 **********
2026-04-05 05:51:04.019800 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-05 05:51:04.019810 | orchestrator |
2026-04-05 05:51:04.019821 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 05:51:04.019832 | orchestrator | Sunday 05 April 2026 05:50:54 +0000 (0:00:01.788) 0:37:31.448 **********
2026-04-05 05:51:04.019842 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 05:51:04.019855 | orchestrator |
2026-04-05 05:51:04.019865 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 05:51:04.019876 | orchestrator | Sunday 05 April 2026 05:50:56 +0000 (0:00:01.565) 0:37:33.013 **********
2026-04-05 05:51:04.019892 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 05:51:04.019903 | orchestrator |
2026-04-05 05:51:04.019914 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:51:04.019924 | orchestrator | Sunday 05 April 2026 05:50:58 +0000 (0:00:01.942) 0:37:34.955 **********
2026-04-05 05:51:04.019935 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019946 | orchestrator |
2026-04-05 05:51:04.019957 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:51:04.019967 | orchestrator | Sunday 05 April 2026 05:50:59 +0000 (0:00:01.149) 0:37:36.105 **********
2026-04-05 05:51:04.019978 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.019989 | orchestrator |
2026-04-05 05:51:04.019999 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:51:04.020010 | orchestrator | Sunday 05 April 2026 05:51:00 +0000 (0:00:01.126) 0:37:37.231 **********
2026-04-05 05:51:04.020020 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.020031 | orchestrator |
2026-04-05 05:51:04.020042 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 05:51:04.020053 | orchestrator | Sunday 05 April 2026 05:51:01 +0000 (0:00:01.121) 0:37:38.352 **********
2026-04-05 05:51:04.020063 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.020074 | orchestrator |
2026-04-05 05:51:04.020084 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 05:51:04.020095 | orchestrator | Sunday 05 April 2026 05:51:02 +0000 (0:00:01.117) 0:37:39.469 **********
2026-04-05 05:51:04.020106 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.020116 | orchestrator |
2026-04-05 05:51:04.020127 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 05:51:04.020138 | orchestrator | Sunday 05 April 2026 05:51:03 +0000 (0:00:01.106) 0:37:40.576 **********
2026-04-05 05:51:04.020149 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:04.020159 | orchestrator |
2026-04-05 05:51:04.020177 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 05:51:55.288356 | orchestrator | Sunday 05 April 2026 05:51:05 +0000 (0:00:01.152) 0:37:41.729 **********
2026-04-05 05:51:55.288509 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:55.288527 | orchestrator |
2026-04-05 05:51:55.288540 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 05:51:55.288553 | orchestrator | Sunday 05 April 2026 05:51:06 +0000 (0:00:01.203) 0:37:42.932 **********
2026-04-05 05:51:55.288564 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:55.288575 | orchestrator |
2026-04-05 05:51:55.288587 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 05:51:55.288627 | orchestrator | Sunday 05 April 2026 05:51:07 +0000 (0:00:01.140) 0:37:44.139 **********
2026-04-05 05:51:55.288639 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:55.288650 | orchestrator |
2026-04-05 05:51:55.288661 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 05:51:55.288748 | orchestrator | Sunday 05 April 2026 05:51:08 +0000 (0:00:01.140) 0:37:45.280 **********
2026-04-05 05:51:55.288761 | orchestrator | skipping: [testbed-node-3]
2026-04-05 05:51:55.288772 | orchestrator |
2026-04-05 05:51:55.288783 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 05:51:55.288794 | orchestrator | Sunday 05 April 2026 05:51:09 +0000 (0:00:01.214) 0:37:46.495 **********
2026-04-05 05:51:55.288804 | orchestrator | ok: [testbed-node-3]
2026-04-05 05:51:55.288817 | orchestrator |
2026-04-05 05:51:55.288827 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 05:51:55.288838 | orchestrator | Sunday 05 April 2026 05:51:11 +0000 (0:00:01.346) 0:37:47.842 **********
2026-04-05 05:51:55.288849 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-05 05:51:55.288861 | orchestrator |
2026-04-05 05:51:55.288874 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 05:51:55.288886 | orchestrator | Sunday 05 April 2026 05:51:16 +0000 (0:00:04.912) 0:37:52.754 **********
2026-04-05 05:51:55.288900 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 05:51:55.288913 | orchestrator |
2026-04-05 05:51:55.288926 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 05:51:55.288939 | orchestrator | Sunday 05 April 2026 05:51:17 +0000 (0:00:01.182) 0:37:53.936 **********
2026-04-05 05:51:55.288955 |
orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-05 05:51:55.288972 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-05 05:51:55.288987 | orchestrator | 2026-04-05 05:51:55.289001 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:51:55.289013 | orchestrator | Sunday 05 April 2026 05:51:24 +0000 (0:00:07.666) 0:38:01.604 ********** 2026-04-05 05:51:55.289026 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289038 | orchestrator | 2026-04-05 05:51:55.289050 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:51:55.289081 | orchestrator | Sunday 05 April 2026 05:51:26 +0000 (0:00:01.221) 0:38:02.825 ********** 2026-04-05 05:51:55.289094 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289107 | orchestrator | 2026-04-05 05:51:55.289119 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:51:55.289132 | orchestrator | Sunday 05 April 2026 05:51:27 +0000 (0:00:01.172) 0:38:03.998 ********** 2026-04-05 05:51:55.289145 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289157 | orchestrator | 2026-04-05 05:51:55.289168 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-05 05:51:55.289179 | orchestrator | Sunday 05 April 2026 05:51:28 +0000 (0:00:01.147) 0:38:05.145 ********** 2026-04-05 05:51:55.289189 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289200 | orchestrator | 2026-04-05 05:51:55.289211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:51:55.289231 | orchestrator | Sunday 05 April 2026 05:51:29 +0000 (0:00:01.151) 0:38:06.297 ********** 2026-04-05 05:51:55.289242 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289253 | orchestrator | 2026-04-05 05:51:55.289264 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:51:55.289275 | orchestrator | Sunday 05 April 2026 05:51:30 +0000 (0:00:01.194) 0:38:07.492 ********** 2026-04-05 05:51:55.289285 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.289296 | orchestrator | 2026-04-05 05:51:55.289307 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:51:55.289317 | orchestrator | Sunday 05 April 2026 05:51:31 +0000 (0:00:01.222) 0:38:08.714 ********** 2026-04-05 05:51:55.289328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 05:51:55.289340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 05:51:55.289350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 05:51:55.289362 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289372 | orchestrator | 2026-04-05 05:51:55.289384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 05:51:55.289413 | orchestrator | Sunday 05 April 2026 05:51:33 +0000 (0:00:01.434) 0:38:10.149 ********** 2026-04-05 05:51:55.289424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 05:51:55.289435 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 05:51:55.289445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 05:51:55.289456 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289467 | orchestrator | 2026-04-05 05:51:55.289478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:51:55.289488 | orchestrator | Sunday 05 April 2026 05:51:34 +0000 (0:00:01.481) 0:38:11.631 ********** 2026-04-05 05:51:55.289499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 05:51:55.289509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 05:51:55.289520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 05:51:55.289530 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289541 | orchestrator | 2026-04-05 05:51:55.289551 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:51:55.289562 | orchestrator | Sunday 05 April 2026 05:51:36 +0000 (0:00:01.936) 0:38:13.568 ********** 2026-04-05 05:51:55.289573 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.289583 | orchestrator | 2026-04-05 05:51:55.289594 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:51:55.289604 | orchestrator | Sunday 05 April 2026 05:51:38 +0000 (0:00:01.165) 0:38:14.733 ********** 2026-04-05 05:51:55.289615 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 05:51:55.289626 | orchestrator | 2026-04-05 05:51:55.289636 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 05:51:55.289647 | orchestrator | Sunday 05 April 2026 05:51:39 +0000 (0:00:01.866) 0:38:16.600 ********** 2026-04-05 05:51:55.289657 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.289685 | orchestrator | 
2026-04-05 05:51:55.289696 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-05 05:51:55.289707 | orchestrator | Sunday 05 April 2026 05:51:41 +0000 (0:00:01.843) 0:38:18.444 ********** 2026-04-05 05:51:55.289718 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.289728 | orchestrator | 2026-04-05 05:51:55.289739 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-05 05:51:55.289750 | orchestrator | Sunday 05 April 2026 05:51:42 +0000 (0:00:01.148) 0:38:19.592 ********** 2026-04-05 05:51:55.289761 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:51:55.289773 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:51:55.289783 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:51:55.289801 | orchestrator | 2026-04-05 05:51:55.289812 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-05 05:51:55.289822 | orchestrator | Sunday 05 April 2026 05:51:44 +0000 (0:00:01.681) 0:38:21.274 ********** 2026-04-05 05:51:55.289833 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-04-05 05:51:55.289844 | orchestrator | 2026-04-05 05:51:55.289854 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-05 05:51:55.289865 | orchestrator | Sunday 05 April 2026 05:51:46 +0000 (0:00:01.467) 0:38:22.741 ********** 2026-04-05 05:51:55.289875 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289886 | orchestrator | 2026-04-05 05:51:55.289897 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-05 05:51:55.289908 | orchestrator | Sunday 05 April 2026 05:51:47 +0000 (0:00:01.119) 
0:38:23.861 ********** 2026-04-05 05:51:55.289918 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.289929 | orchestrator | 2026-04-05 05:51:55.289940 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-05 05:51:55.289955 | orchestrator | Sunday 05 April 2026 05:51:48 +0000 (0:00:01.103) 0:38:24.964 ********** 2026-04-05 05:51:55.289966 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.289977 | orchestrator | 2026-04-05 05:51:55.289988 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-05 05:51:55.289998 | orchestrator | Sunday 05 April 2026 05:51:49 +0000 (0:00:01.438) 0:38:26.403 ********** 2026-04-05 05:51:55.290009 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:51:55.290093 | orchestrator | 2026-04-05 05:51:55.290105 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-05 05:51:55.290116 | orchestrator | Sunday 05 April 2026 05:51:50 +0000 (0:00:01.152) 0:38:27.555 ********** 2026-04-05 05:51:55.290127 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 05:51:55.290138 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 05:51:55.290148 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 05:51:55.290159 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 05:51:55.290170 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 05:51:55.290180 | orchestrator | 2026-04-05 05:51:55.290191 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-05 05:51:55.290201 | orchestrator | Sunday 05 April 2026 05:51:53 +0000 (0:00:03.035) 0:38:30.591 ********** 2026-04-05 
05:51:55.290212 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:51:55.290223 | orchestrator | 2026-04-05 05:51:55.290233 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 05:51:55.290244 | orchestrator | Sunday 05 April 2026 05:51:55 +0000 (0:00:01.168) 0:38:31.759 ********** 2026-04-05 05:51:55.290255 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-04-05 05:51:55.290266 | orchestrator | 2026-04-05 05:51:55.290276 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 05:53:01.518977 | orchestrator | Sunday 05 April 2026 05:51:56 +0000 (0:00:01.483) 0:38:33.243 ********** 2026-04-05 05:53:01.519097 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 05:53:01.519113 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-05 05:53:01.519126 | orchestrator | 2026-04-05 05:53:01.519138 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 05:53:01.519150 | orchestrator | Sunday 05 April 2026 05:51:58 +0000 (0:00:01.849) 0:38:35.093 ********** 2026-04-05 05:53:01.519161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 05:53:01.519172 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 05:53:01.519209 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 05:53:01.519222 | orchestrator | 2026-04-05 05:53:01.519233 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 05:53:01.519244 | orchestrator | Sunday 05 April 2026 05:52:01 +0000 (0:00:03.131) 0:38:38.225 ********** 2026-04-05 05:53:01.519255 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 05:53:01.519266 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 
05:53:01.519278 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519289 | orchestrator | 2026-04-05 05:53:01.519299 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 05:53:01.519310 | orchestrator | Sunday 05 April 2026 05:52:03 +0000 (0:00:02.001) 0:38:40.227 ********** 2026-04-05 05:53:01.519321 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.519332 | orchestrator | 2026-04-05 05:53:01.519343 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 05:53:01.519353 | orchestrator | Sunday 05 April 2026 05:52:04 +0000 (0:00:01.221) 0:38:41.448 ********** 2026-04-05 05:53:01.519364 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.519375 | orchestrator | 2026-04-05 05:53:01.519385 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 05:53:01.519396 | orchestrator | Sunday 05 April 2026 05:52:05 +0000 (0:00:01.196) 0:38:42.645 ********** 2026-04-05 05:53:01.519406 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.519417 | orchestrator | 2026-04-05 05:53:01.519428 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 05:53:01.519438 | orchestrator | Sunday 05 April 2026 05:52:07 +0000 (0:00:01.219) 0:38:43.865 ********** 2026-04-05 05:53:01.519449 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-04-05 05:53:01.519461 | orchestrator | 2026-04-05 05:53:01.519472 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 05:53:01.519482 | orchestrator | Sunday 05 April 2026 05:52:08 +0000 (0:00:01.504) 0:38:45.369 ********** 2026-04-05 05:53:01.519493 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519506 | orchestrator | 2026-04-05 05:53:01.519519 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-05 05:53:01.519532 | orchestrator | Sunday 05 April 2026 05:52:10 +0000 (0:00:01.474) 0:38:46.844 ********** 2026-04-05 05:53:01.519545 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519558 | orchestrator | 2026-04-05 05:53:01.519572 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-05 05:53:01.519584 | orchestrator | Sunday 05 April 2026 05:52:13 +0000 (0:00:03.616) 0:38:50.460 ********** 2026-04-05 05:53:01.519597 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-04-05 05:53:01.519630 | orchestrator | 2026-04-05 05:53:01.519644 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-05 05:53:01.519657 | orchestrator | Sunday 05 April 2026 05:52:15 +0000 (0:00:01.639) 0:38:52.099 ********** 2026-04-05 05:53:01.519669 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519682 | orchestrator | 2026-04-05 05:53:01.519695 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-05 05:53:01.519723 | orchestrator | Sunday 05 April 2026 05:52:17 +0000 (0:00:01.954) 0:38:54.054 ********** 2026-04-05 05:53:01.519736 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519749 | orchestrator | 2026-04-05 05:53:01.519762 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-05 05:53:01.519775 | orchestrator | Sunday 05 April 2026 05:52:19 +0000 (0:00:01.953) 0:38:56.007 ********** 2026-04-05 05:53:01.519789 | orchestrator | ok: [testbed-node-3] 2026-04-05 05:53:01.519802 | orchestrator | 2026-04-05 05:53:01.519815 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-05 05:53:01.519828 | orchestrator | Sunday 05 April 2026 05:52:21 +0000 (0:00:02.235) 0:38:58.243 ********** 2026-04-05 
05:53:01.519840 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.519862 | orchestrator | 2026-04-05 05:53:01.519875 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-05 05:53:01.519886 | orchestrator | Sunday 05 April 2026 05:52:22 +0000 (0:00:01.180) 0:38:59.424 ********** 2026-04-05 05:53:01.519897 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.519908 | orchestrator | 2026-04-05 05:53:01.519919 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 05:53:01.519930 | orchestrator | Sunday 05 April 2026 05:52:23 +0000 (0:00:01.126) 0:39:00.550 ********** 2026-04-05 05:53:01.519940 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-05 05:53:01.519951 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-05 05:53:01.519962 | orchestrator | 2026-04-05 05:53:01.519973 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 05:53:01.519983 | orchestrator | Sunday 05 April 2026 05:52:25 +0000 (0:00:01.811) 0:39:02.362 ********** 2026-04-05 05:53:01.519994 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-05 05:53:01.520004 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-05 05:53:01.520015 | orchestrator | 2026-04-05 05:53:01.520026 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-05 05:53:01.520037 | orchestrator | Sunday 05 April 2026 05:52:28 +0000 (0:00:02.867) 0:39:05.229 ********** 2026-04-05 05:53:01.520047 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-05 05:53:01.520076 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-05 05:53:01.520088 | orchestrator | 2026-04-05 05:53:01.520099 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 05:53:01.520109 | orchestrator | Sunday 05 April 2026 05:52:33 +0000 (0:00:04.494) 
0:39:09.724 ********** 2026-04-05 05:53:01.520120 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.520130 | orchestrator | 2026-04-05 05:53:01.520141 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-05 05:53:01.520151 | orchestrator | Sunday 05 April 2026 05:52:34 +0000 (0:00:01.222) 0:39:10.946 ********** 2026-04-05 05:53:01.520162 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.520173 | orchestrator | 2026-04-05 05:53:01.520183 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 05:53:01.520194 | orchestrator | Sunday 05 April 2026 05:52:35 +0000 (0:00:01.258) 0:39:12.205 ********** 2026-04-05 05:53:01.520205 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.520215 | orchestrator | 2026-04-05 05:53:01.520226 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-05 05:53:01.520237 | orchestrator | Sunday 05 April 2026 05:52:36 +0000 (0:00:01.356) 0:39:13.562 ********** 2026-04-05 05:53:01.520247 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.520258 | orchestrator | 2026-04-05 05:53:01.520269 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-05 05:53:01.520280 | orchestrator | Sunday 05 April 2026 05:52:38 +0000 (0:00:01.184) 0:39:14.747 ********** 2026-04-05 05:53:01.520290 | orchestrator | skipping: [testbed-node-3] 2026-04-05 05:53:01.520301 | orchestrator | 2026-04-05 05:53:01.520311 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-05 05:53:01.520322 | orchestrator | Sunday 05 April 2026 05:52:39 +0000 (0:00:01.163) 0:39:15.910 ********** 2026-04-05 05:53:01.520333 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-05 05:53:01.520345 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-05 05:53:01.520356 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:53:01.520366 | orchestrator | 2026-04-05 05:53:01.520377 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-05 05:53:01.520388 | orchestrator | 2026-04-05 05:53:01.520398 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 05:53:01.520409 | orchestrator | Sunday 05 April 2026 05:52:47 +0000 (0:00:07.943) 0:39:23.853 ********** 2026-04-05 05:53:01.520426 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-05 05:53:01.520437 | orchestrator | 2026-04-05 05:53:01.520448 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 05:53:01.520458 | orchestrator | Sunday 05 April 2026 05:52:48 +0000 (0:00:01.156) 0:39:25.010 ********** 2026-04-05 05:53:01.520469 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520480 | orchestrator | 2026-04-05 05:53:01.520490 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 05:53:01.520501 | orchestrator | Sunday 05 April 2026 05:52:49 +0000 (0:00:01.502) 0:39:26.513 ********** 2026-04-05 05:53:01.520512 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520522 | orchestrator | 2026-04-05 05:53:01.520533 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 05:53:01.520544 | orchestrator | Sunday 05 April 2026 05:52:50 +0000 (0:00:01.189) 0:39:27.702 ********** 2026-04-05 05:53:01.520554 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520565 | orchestrator | 2026-04-05 05:53:01.520575 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-04-05 05:53:01.520586 | orchestrator | Sunday 05 April 2026 05:52:52 +0000 (0:00:01.437) 0:39:29.139 ********** 2026-04-05 05:53:01.520596 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520654 | orchestrator | 2026-04-05 05:53:01.520666 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 05:53:01.520682 | orchestrator | Sunday 05 April 2026 05:52:53 +0000 (0:00:01.124) 0:39:30.263 ********** 2026-04-05 05:53:01.520692 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520703 | orchestrator | 2026-04-05 05:53:01.520714 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 05:53:01.520724 | orchestrator | Sunday 05 April 2026 05:52:54 +0000 (0:00:01.245) 0:39:31.509 ********** 2026-04-05 05:53:01.520735 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520745 | orchestrator | 2026-04-05 05:53:01.520756 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 05:53:01.520766 | orchestrator | Sunday 05 April 2026 05:52:56 +0000 (0:00:01.208) 0:39:32.718 ********** 2026-04-05 05:53:01.520777 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:01.520787 | orchestrator | 2026-04-05 05:53:01.520798 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 05:53:01.520809 | orchestrator | Sunday 05 April 2026 05:52:57 +0000 (0:00:01.340) 0:39:34.058 ********** 2026-04-05 05:53:01.520819 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520830 | orchestrator | 2026-04-05 05:53:01.520840 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 05:53:01.520851 | orchestrator | Sunday 05 April 2026 05:52:58 +0000 (0:00:01.154) 0:39:35.213 ********** 2026-04-05 05:53:01.520862 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:53:01.520872 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:53:01.520883 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:53:01.520894 | orchestrator | 2026-04-05 05:53:01.520904 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 05:53:01.520915 | orchestrator | Sunday 05 April 2026 05:53:00 +0000 (0:00:01.759) 0:39:36.972 ********** 2026-04-05 05:53:01.520925 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:01.520936 | orchestrator | 2026-04-05 05:53:01.520947 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 05:53:01.520964 | orchestrator | Sunday 05 April 2026 05:53:01 +0000 (0:00:01.254) 0:39:38.227 ********** 2026-04-05 05:53:25.764407 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:53:25.764521 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:53:25.764536 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:53:25.764575 | orchestrator | 2026-04-05 05:53:25.764630 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 05:53:25.764643 | orchestrator | Sunday 05 April 2026 05:53:04 +0000 (0:00:02.795) 0:39:41.023 ********** 2026-04-05 05:53:25.764654 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 05:53:25.764666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 05:53:25.764676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 05:53:25.764687 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.764698 | orchestrator | 
2026-04-05 05:53:25.764709 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 05:53:25.764720 | orchestrator | Sunday 05 April 2026 05:53:05 +0000 (0:00:01.423) 0:39:42.446 ********** 2026-04-05 05:53:25.764732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764758 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764769 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.764779 | orchestrator | 2026-04-05 05:53:25.764790 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 05:53:25.764801 | orchestrator | Sunday 05 April 2026 05:53:07 +0000 (0:00:01.754) 0:39:44.201 ********** 2026-04-05 05:53:25.764814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:25.764868 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.764879 | orchestrator | 2026-04-05 05:53:25.764890 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 05:53:25.764901 | orchestrator | Sunday 05 April 2026 05:53:08 +0000 (0:00:01.216) 0:39:45.417 ********** 2026-04-05 05:53:25.764915 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:53:02.059125', 'end': '2026-04-05 05:53:02.091632', 'delta': '0:00:00.032507', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 05:53:25.764956 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:53:02.588127', 'end': '2026-04-05 05:53:02.644390', 'delta': '0:00:00.056263', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 05:53:25.764971 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:53:03.156553', 'end': '2026-04-05 05:53:03.191493', 'delta': '0:00:00.034940', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 05:53:25.764985 | orchestrator | 2026-04-05 05:53:25.764998 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 05:53:25.765009 | orchestrator | Sunday 05 April 2026 05:53:09 +0000 (0:00:01.262) 0:39:46.680 ********** 2026-04-05 05:53:25.765020 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:25.765032 | orchestrator | 2026-04-05 05:53:25.765043 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-04-05 05:53:25.765053 | orchestrator | Sunday 05 April 2026 05:53:11 +0000 (0:00:01.318) 0:39:47.999 ********** 2026-04-05 05:53:25.765064 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765075 | orchestrator | 2026-04-05 05:53:25.765086 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 05:53:25.765097 | orchestrator | Sunday 05 April 2026 05:53:12 +0000 (0:00:01.292) 0:39:49.291 ********** 2026-04-05 05:53:25.765107 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:25.765118 | orchestrator | 2026-04-05 05:53:25.765129 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 05:53:25.765140 | orchestrator | Sunday 05 April 2026 05:53:13 +0000 (0:00:01.133) 0:39:50.425 ********** 2026-04-05 05:53:25.765150 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:53:25.765161 | orchestrator | 2026-04-05 05:53:25.765172 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:53:25.765182 | orchestrator | Sunday 05 April 2026 05:53:16 +0000 (0:00:02.402) 0:39:52.828 ********** 2026-04-05 05:53:25.765193 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:25.765204 | orchestrator | 2026-04-05 05:53:25.765214 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 05:53:25.765225 | orchestrator | Sunday 05 April 2026 05:53:17 +0000 (0:00:01.287) 0:39:54.115 ********** 2026-04-05 05:53:25.765241 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765252 | orchestrator | 2026-04-05 05:53:25.765263 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 05:53:25.765273 | orchestrator | Sunday 05 April 2026 05:53:18 +0000 (0:00:01.152) 0:39:55.268 ********** 2026-04-05 
05:53:25.765291 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765302 | orchestrator | 2026-04-05 05:53:25.765312 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 05:53:25.765323 | orchestrator | Sunday 05 April 2026 05:53:19 +0000 (0:00:01.252) 0:39:56.521 ********** 2026-04-05 05:53:25.765334 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765345 | orchestrator | 2026-04-05 05:53:25.765355 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 05:53:25.765366 | orchestrator | Sunday 05 April 2026 05:53:20 +0000 (0:00:01.138) 0:39:57.659 ********** 2026-04-05 05:53:25.765376 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765387 | orchestrator | 2026-04-05 05:53:25.765398 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 05:53:25.765409 | orchestrator | Sunday 05 April 2026 05:53:22 +0000 (0:00:01.220) 0:39:58.880 ********** 2026-04-05 05:53:25.765419 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:25.765430 | orchestrator | 2026-04-05 05:53:25.765441 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 05:53:25.765452 | orchestrator | Sunday 05 April 2026 05:53:23 +0000 (0:00:01.160) 0:40:00.040 ********** 2026-04-05 05:53:25.765462 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765473 | orchestrator | 2026-04-05 05:53:25.765484 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 05:53:25.765495 | orchestrator | Sunday 05 April 2026 05:53:24 +0000 (0:00:01.115) 0:40:01.156 ********** 2026-04-05 05:53:25.765505 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:25.765516 | orchestrator | 2026-04-05 05:53:25.765527 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-04-05 05:53:25.765538 | orchestrator | Sunday 05 April 2026 05:53:25 +0000 (0:00:01.179) 0:40:02.336 ********** 2026-04-05 05:53:25.765548 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:25.765559 | orchestrator | 2026-04-05 05:53:25.765576 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 05:53:28.155512 | orchestrator | Sunday 05 April 2026 05:53:26 +0000 (0:00:01.130) 0:40:03.466 ********** 2026-04-05 05:53:28.155748 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:53:28.155771 | orchestrator | 2026-04-05 05:53:28.155784 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 05:53:28.155795 | orchestrator | Sunday 05 April 2026 05:53:27 +0000 (0:00:01.183) 0:40:04.650 ********** 2026-04-05 05:53:28.155809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.155827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}})  2026-04-05 05:53:28.155842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 05:53:28.155894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}})  2026-04-05 05:53:28.155908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.155920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.155951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 05:53:28.155964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.155975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:53:28.155987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.156005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}})  2026-04-05 05:53:28.156022 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}})  2026-04-05 05:53:28.156034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:28.156061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 05:53:29.446329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:29.446454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 05:53:29.446496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 05:53:29.446523 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:53:29.446544 | orchestrator | 2026-04-05 05:53:29.446563 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 05:53:29.446624 | orchestrator | Sunday 05 April 2026 05:53:29 +0000 (0:00:01.328) 0:40:05.979 ********** 2026-04-05 05:53:29.446648 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446671 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446760 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446866 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446878 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:29.446911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:34.776306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:34.776436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:34.776457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:34.776474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:53:34.776515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:53:34.776552 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:53:34.776565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:53:34.776631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:53:34.776654 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:53:34.776668 | orchestrator |
2026-04-05 05:53:34.776679 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 05:53:34.776692 | orchestrator | Sunday 05 April 2026 05:53:30 +0000 (0:00:01.371) 0:40:07.350 **********
2026-04-05 05:53:34.776703 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:53:34.776715 | orchestrator |
2026-04-05 05:53:34.776726 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 05:53:34.776737 | orchestrator | Sunday 05 April 2026 05:53:32 +0000 (0:00:01.512) 0:40:08.863 **********
2026-04-05 05:53:34.776747 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:53:34.776758 | orchestrator |
2026-04-05 05:53:34.776769 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:53:34.776780 | orchestrator | Sunday 05 April 2026 05:53:33 +0000 (0:00:01.175) 0:40:10.039 **********
2026-04-05 05:53:34.776790 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:53:34.776801 | orchestrator |
2026-04-05 05:53:34.776812 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:53:34.776831 | orchestrator | Sunday 05 April 2026 05:53:34 +0000 (0:00:01.447) 0:40:11.486 **********
2026-04-05 05:54:17.884986 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885066 | orchestrator |
2026-04-05 05:54:17.885072 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 05:54:17.885078 | orchestrator | Sunday 05 April 2026 05:53:35 +0000 (0:00:01.193) 0:40:12.680 **********
2026-04-05 05:54:17.885082 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885086 | orchestrator |
2026-04-05 05:54:17.885090 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 05:54:17.885094 | orchestrator | Sunday 05 April 2026 05:53:37 +0000 (0:00:01.248) 0:40:13.930 **********
2026-04-05 05:54:17.885098 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885102 | orchestrator |
2026-04-05 05:54:17.885106 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 05:54:17.885110 | orchestrator | Sunday 05 April 2026 05:53:38 +0000 (0:00:01.184) 0:40:15.115 **********
2026-04-05 05:54:17.885114 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 05:54:17.885118 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 05:54:17.885122 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 05:54:17.885126 | orchestrator |
2026-04-05 05:54:17.885130 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 05:54:17.885144 | orchestrator | Sunday 05 April 2026 05:53:40 +0000 (0:00:01.771) 0:40:16.887 **********
2026-04-05 05:54:17.885148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 05:54:17.885153 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 05:54:17.885157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 05:54:17.885160 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885164 | orchestrator |
2026-04-05 05:54:17.885168 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 05:54:17.885172 | orchestrator | Sunday 05 April 2026 05:53:41 +0000 (0:00:01.198) 0:40:18.085 **********
2026-04-05 05:54:17.885176 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-05 05:54:17.885180 | orchestrator |
2026-04-05 05:54:17.885185 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 05:54:17.885190 | orchestrator | Sunday 05 April 2026 05:53:42 +0000 (0:00:01.180) 0:40:19.265 **********
2026-04-05 05:54:17.885194 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885212 | orchestrator |
2026-04-05 05:54:17.885217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 05:54:17.885220 | orchestrator | Sunday 05 April 2026 05:53:43 +0000 (0:00:01.174) 0:40:20.439 **********
2026-04-05 05:54:17.885224 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885228 | orchestrator |
2026-04-05 05:54:17.885232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 05:54:17.885235 | orchestrator | Sunday 05 April 2026 05:53:44 +0000 (0:00:01.162) 0:40:21.601 **********
2026-04-05 05:54:17.885239 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885243 | orchestrator |
2026-04-05 05:54:17.885247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 05:54:17.885250 | orchestrator | Sunday 05 April 2026 05:53:46 +0000 (0:00:01.171) 0:40:22.773 **********
2026-04-05 05:54:17.885254 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885258 | orchestrator |
2026-04-05 05:54:17.885262 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 05:54:17.885266 | orchestrator | Sunday 05 April 2026 05:53:47 +0000 (0:00:01.247) 0:40:24.020 **********
2026-04-05 05:54:17.885269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 05:54:17.885273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 05:54:17.885277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 05:54:17.885281 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885285 | orchestrator |
2026-04-05 05:54:17.885288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 05:54:17.885292 | orchestrator | Sunday 05 April 2026 05:53:49 +0000 (0:00:01.797) 0:40:25.818 **********
2026-04-05 05:54:17.885296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 05:54:17.885300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 05:54:17.885303 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 05:54:17.885307 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885311 | orchestrator |
2026-04-05 05:54:17.885314 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 05:54:17.885318 | orchestrator | Sunday 05 April 2026 05:53:51 +0000 (0:00:01.920) 0:40:27.738 **********
2026-04-05 05:54:17.885322 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 05:54:17.885326 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 05:54:17.885329 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 05:54:17.885333 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885337 | orchestrator |
2026-04-05 05:54:17.885341 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 05:54:17.885344 | orchestrator | Sunday 05 April 2026 05:53:52 +0000 (0:00:01.429) 0:40:29.168 **********
2026-04-05 05:54:17.885348 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885352 | orchestrator |
2026-04-05 05:54:17.885356 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 05:54:17.885359 | orchestrator | Sunday 05 April 2026 05:53:53 +0000 (0:00:01.135) 0:40:30.304 **********
2026-04-05 05:54:17.885363 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 05:54:17.885367 | orchestrator |
2026-04-05 05:54:17.885371 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 05:54:17.885374 | orchestrator | Sunday 05 April 2026 05:53:54 +0000 (0:00:01.350) 0:40:31.654 **********
2026-04-05 05:54:17.885387 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:54:17.885391 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:54:17.885394 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:54:17.885398 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:54:17.885406 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 05:54:17.885410 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:54:17.885413 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:54:17.885417 | orchestrator |
2026-04-05 05:54:17.885421 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 05:54:17.885424 | orchestrator | Sunday 05 April 2026 05:53:56 +0000 (0:00:01.960) 0:40:33.614 **********
2026-04-05 05:54:17.885428 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:54:17.885432 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:54:17.885439 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:54:17.885442 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 05:54:17.885446 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 05:54:17.885450 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 05:54:17.885453 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 05:54:17.885457 | orchestrator |
2026-04-05 05:54:17.885461 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-05 05:54:17.885465 | orchestrator | Sunday 05 April 2026 05:53:59 +0000 (0:00:02.814) 0:40:36.428 **********
2026-04-05 05:54:17.885468 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885472 | orchestrator |
2026-04-05 05:54:17.885476 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-05 05:54:17.885479 | orchestrator | Sunday 05 April 2026 05:54:00 +0000 (0:00:01.114) 0:40:37.543 **********
2026-04-05 05:54:17.885483 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885487 | orchestrator |
2026-04-05 05:54:17.885491 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-05 05:54:17.885494 | orchestrator | Sunday 05 April 2026 05:54:01 +0000 (0:00:00.778) 0:40:38.321 **********
2026-04-05 05:54:17.885498 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885502 | orchestrator |
2026-04-05 05:54:17.885505 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-05 05:54:17.885509 | orchestrator | Sunday 05 April 2026 05:54:02 +0000 (0:00:00.905) 0:40:39.227 **********
2026-04-05 05:54:17.885513 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-04-05 05:54:17.885517 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-04-05 05:54:17.885520 | orchestrator |
2026-04-05 05:54:17.885524 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 05:54:17.885528 | orchestrator | Sunday 05 April 2026 05:54:07 +0000 (0:00:04.732) 0:40:43.959 **********
2026-04-05 05:54:17.885577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-04-05 05:54:17.885582 | orchestrator |
2026-04-05 05:54:17.885587 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 05:54:17.885592 | orchestrator | Sunday 05 April 2026 05:54:08 +0000 (0:00:01.166) 0:40:45.125 **********
2026-04-05 05:54:17.885596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-04-05 05:54:17.885601 | orchestrator |
2026-04-05 05:54:17.885605 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 05:54:17.885610 | orchestrator | Sunday 05 April 2026 05:54:09 +0000 (0:00:01.370) 0:40:46.496 **********
2026-04-05 05:54:17.885614 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885619 | orchestrator |
2026-04-05 05:54:17.885623 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 05:54:17.885627 | orchestrator | Sunday 05 April 2026 05:54:10 +0000 (0:00:01.116) 0:40:47.613 **********
2026-04-05 05:54:17.885632 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885640 | orchestrator |
2026-04-05 05:54:17.885645 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 05:54:17.885649 | orchestrator | Sunday 05 April 2026 05:54:12 +0000 (0:00:01.483) 0:40:49.097 **********
2026-04-05 05:54:17.885653 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885658 | orchestrator |
2026-04-05 05:54:17.885662 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 05:54:17.885666 | orchestrator | Sunday 05 April 2026 05:54:13 +0000 (0:00:01.552) 0:40:50.650 **********
2026-04-05 05:54:17.885670 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:17.885675 | orchestrator |
2026-04-05 05:54:17.885679 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 05:54:17.885684 | orchestrator | Sunday 05 April 2026 05:54:15 +0000 (0:00:01.534) 0:40:52.184 **********
2026-04-05 05:54:17.885688 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885692 | orchestrator |
2026-04-05 05:54:17.885696 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 05:54:17.885701 | orchestrator | Sunday 05 April 2026 05:54:16 +0000 (0:00:01.111) 0:40:53.296 **********
2026-04-05 05:54:17.885705 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885710 | orchestrator |
2026-04-05 05:54:17.885714 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 05:54:17.885718 | orchestrator | Sunday 05 April 2026 05:54:17 +0000 (0:00:01.114) 0:40:54.411 **********
2026-04-05 05:54:17.885722 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:17.885727 | orchestrator |
2026-04-05 05:54:17.885734 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 05:54:57.765351 | orchestrator | Sunday 05 April 2026 05:54:18 +0000 (0:00:01.204) 0:40:55.616 **********
2026-04-05 05:54:57.765493 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.765589 | orchestrator |
2026-04-05 05:54:57.765609 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 05:54:57.765627 | orchestrator | Sunday 05 April 2026 05:54:20 +0000 (0:00:01.532) 0:40:57.148 **********
2026-04-05 05:54:57.765643 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.765659 | orchestrator |
2026-04-05 05:54:57.765676 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 05:54:57.765692 | orchestrator | Sunday 05 April 2026 05:54:21 +0000 (0:00:01.515) 0:40:58.664 **********
2026-04-05 05:54:57.765709 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.765726 | orchestrator |
2026-04-05 05:54:57.765742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 05:54:57.765758 | orchestrator | Sunday 05 April 2026 05:54:22 +0000 (0:00:00.790) 0:40:59.455 **********
2026-04-05 05:54:57.765773 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.765790 | orchestrator |
2026-04-05 05:54:57.765806 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 05:54:57.765841 | orchestrator | Sunday 05 April 2026 05:54:23 +0000 (0:00:00.769) 0:41:00.224 **********
2026-04-05 05:54:57.765859 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.765875 | orchestrator |
2026-04-05 05:54:57.765890 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 05:54:57.765906 | orchestrator | Sunday 05 April 2026 05:54:24 +0000 (0:00:00.826) 0:41:01.050 **********
2026-04-05 05:54:57.765922 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.765938 | orchestrator |
2026-04-05 05:54:57.765954 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 05:54:57.765970 | orchestrator | Sunday 05 April 2026 05:54:25 +0000 (0:00:00.809) 0:41:01.860 **********
2026-04-05 05:54:57.765986 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.766002 | orchestrator |
2026-04-05 05:54:57.766087 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 05:54:57.766107 | orchestrator | Sunday 05 April 2026 05:54:25 +0000 (0:00:00.811) 0:41:02.671 **********
2026-04-05 05:54:57.766124 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766143 | orchestrator |
2026-04-05 05:54:57.766194 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 05:54:57.766213 | orchestrator | Sunday 05 April 2026 05:54:26 +0000 (0:00:00.812) 0:41:03.484 **********
2026-04-05 05:54:57.766231 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766248 | orchestrator |
2026-04-05 05:54:57.766266 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 05:54:57.766283 | orchestrator | Sunday 05 April 2026 05:54:27 +0000 (0:00:00.806) 0:41:04.290 **********
2026-04-05 05:54:57.766303 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766325 | orchestrator |
2026-04-05 05:54:57.766345 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 05:54:57.766365 | orchestrator | Sunday 05 April 2026 05:54:28 +0000 (0:00:00.779) 0:41:05.070 **********
2026-04-05 05:54:57.766383 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.766438 | orchestrator |
2026-04-05 05:54:57.766458 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 05:54:57.766498 | orchestrator | Sunday 05 April 2026 05:54:29 +0000 (0:00:00.790) 0:41:05.860 **********
2026-04-05 05:54:57.766539 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.766578 | orchestrator |
2026-04-05 05:54:57.766595 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 05:54:57.766610 | orchestrator | Sunday 05 April 2026 05:54:29 +0000 (0:00:00.805) 0:41:06.666 **********
2026-04-05 05:54:57.766648 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766663 | orchestrator |
2026-04-05 05:54:57.766679 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 05:54:57.766695 | orchestrator | Sunday 05 April 2026 05:54:30 +0000 (0:00:00.820) 0:41:07.486 **********
2026-04-05 05:54:57.766735 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766752 | orchestrator |
2026-04-05 05:54:57.766768 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 05:54:57.766784 | orchestrator | Sunday 05 April 2026 05:54:31 +0000 (0:00:00.780) 0:41:08.267 **********
2026-04-05 05:54:57.766800 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766817 | orchestrator |
2026-04-05 05:54:57.766832 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 05:54:57.766848 | orchestrator | Sunday 05 April 2026 05:54:32 +0000 (0:00:00.748) 0:41:09.016 **********
2026-04-05 05:54:57.766864 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766880 | orchestrator |
2026-04-05 05:54:57.766896 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 05:54:57.766912 | orchestrator | Sunday 05 April 2026 05:54:33 +0000 (0:00:00.763) 0:41:09.779 **********
2026-04-05 05:54:57.766927 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.766943 | orchestrator |
2026-04-05 05:54:57.766959 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 05:54:57.766976 | orchestrator | Sunday 05 April 2026 05:54:33 +0000 (0:00:00.832) 0:41:10.612 **********
2026-04-05 05:54:57.766994 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767010 | orchestrator |
2026-04-05 05:54:57.767026 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 05:54:57.767042 | orchestrator | Sunday 05 April 2026 05:54:34 +0000 (0:00:00.777) 0:41:11.389 **********
2026-04-05 05:54:57.767058 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767073 | orchestrator |
2026-04-05 05:54:57.767087 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 05:54:57.767100 | orchestrator | Sunday 05 April 2026 05:54:35 +0000 (0:00:00.794) 0:41:12.184 **********
2026-04-05 05:54:57.767113 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767126 | orchestrator |
2026-04-05 05:54:57.767139 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 05:54:57.767152 | orchestrator | Sunday 05 April 2026 05:54:36 +0000 (0:00:00.783) 0:41:12.967 **********
2026-04-05 05:54:57.767188 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767218 | orchestrator |
2026-04-05 05:54:57.767330 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 05:54:57.767348 | orchestrator | Sunday 05 April 2026 05:54:37 +0000 (0:00:00.803) 0:41:13.770 **********
2026-04-05 05:54:57.767361 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767374 | orchestrator |
2026-04-05 05:54:57.767386 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 05:54:57.767399 | orchestrator | Sunday 05 April 2026 05:54:37 +0000 (0:00:00.810) 0:41:14.581 **********
2026-04-05 05:54:57.767413 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767426 | orchestrator |
2026-04-05 05:54:57.767439 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 05:54:57.767453 | orchestrator | Sunday 05 April 2026 05:54:38 +0000 (0:00:00.778) 0:41:15.360 **********
2026-04-05 05:54:57.767466 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767478 | orchestrator |
2026-04-05 05:54:57.767492 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 05:54:57.767539 | orchestrator | Sunday 05 April 2026 05:54:39 +0000 (0:00:00.809) 0:41:16.169 **********
2026-04-05 05:54:57.767552 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.767565 | orchestrator |
2026-04-05 05:54:57.767589 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 05:54:57.767602 | orchestrator | Sunday 05 April 2026 05:54:41 +0000 (0:00:01.560) 0:41:17.730 **********
2026-04-05 05:54:57.767615 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.767628 | orchestrator |
2026-04-05 05:54:57.767641 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 05:54:57.767654 | orchestrator | Sunday 05 April 2026 05:54:42 +0000 (0:00:01.836) 0:41:19.566 **********
2026-04-05 05:54:57.767667 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-04-05 05:54:57.767681 | orchestrator |
2026-04-05 05:54:57.767694 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 05:54:57.767707 | orchestrator | Sunday 05 April 2026 05:54:43 +0000 (0:00:01.113) 0:41:20.680 **********
2026-04-05 05:54:57.767720 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767734 | orchestrator |
2026-04-05 05:54:57.767747 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 05:54:57.767782 | orchestrator | Sunday 05 April 2026 05:54:45 +0000 (0:00:01.168) 0:41:21.849 **********
2026-04-05 05:54:57.767796 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767809 | orchestrator |
2026-04-05 05:54:57.767821 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 05:54:57.767834 | orchestrator | Sunday 05 April 2026 05:54:46 +0000 (0:00:01.196) 0:41:23.045 **********
2026-04-05 05:54:57.767847 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 05:54:57.767859 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 05:54:57.767873 | orchestrator |
2026-04-05 05:54:57.767887 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 05:54:57.767900 | orchestrator | Sunday 05 April 2026 05:54:48 +0000 (0:00:01.827) 0:41:24.873 **********
2026-04-05 05:54:57.767912 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.767925 | orchestrator |
2026-04-05 05:54:57.767937 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 05:54:57.767950 | orchestrator | Sunday 05 April 2026 05:54:49 +0000 (0:00:01.532) 0:41:26.405 **********
2026-04-05 05:54:57.767964 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.767977 | orchestrator |
2026-04-05 05:54:57.767990 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 05:54:57.768003 | orchestrator | Sunday 05 April 2026 05:54:50 +0000 (0:00:01.217) 0:41:27.623 **********
2026-04-05 05:54:57.768015 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.768028 | orchestrator |
2026-04-05 05:54:57.768041 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 05:54:57.768069 | orchestrator | Sunday 05 April 2026 05:54:51 +0000 (0:00:00.809) 0:41:28.433 **********
2026-04-05 05:54:57.768081 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.768093 | orchestrator |
2026-04-05 05:54:57.768106 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 05:54:57.768118 | orchestrator | Sunday 05 April 2026 05:54:52 +0000 (0:00:00.782) 0:41:29.216 **********
2026-04-05 05:54:57.768131 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-04-05 05:54:57.768143 | orchestrator |
2026-04-05 05:54:57.768154 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 05:54:57.768167 | orchestrator | Sunday 05 April 2026 05:54:53 +0000 (0:00:01.122) 0:41:30.339 **********
2026-04-05 05:54:57.768180 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:54:57.768192 | orchestrator |
2026-04-05 05:54:57.768205 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 05:54:57.768218 | orchestrator | Sunday 05 April 2026 05:54:55 +0000 (0:00:01.714) 0:41:32.054 **********
2026-04-05 05:54:57.768231 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 05:54:57.768243 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 05:54:57.768255 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 05:54:57.768266 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.768277 | orchestrator |
2026-04-05 05:54:57.768288 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 05:54:57.768299 | orchestrator | Sunday 05 April 2026 05:54:56 +0000 (0:00:01.197) 0:41:33.252 **********
2026-04-05 05:54:57.768310 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:54:57.768321 | orchestrator |
2026-04-05 05:54:57.768333 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 05:54:57.768345 | orchestrator | Sunday 05 April 2026 05:54:57 +0000 (0:00:01.131) 0:41:34.384 **********
2026-04-05 05:54:57.768372 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.345553 | orchestrator |
2026-04-05 05:55:41.345693 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 05:55:41.345714 | orchestrator | Sunday 05 April 2026 05:54:58 +0000 (0:00:01.193) 0:41:35.577 **********
2026-04-05 05:55:41.345726 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.345738 | orchestrator |
2026-04-05 05:55:41.345750 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 05:55:41.345760 | orchestrator | Sunday 05 April 2026 05:55:00 +0000 (0:00:01.211) 0:41:36.789 **********
2026-04-05 05:55:41.345771 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.345782 | orchestrator |
2026-04-05 05:55:41.345792 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 05:55:41.345803 | orchestrator | Sunday 05 April 2026 05:55:01 +0000 (0:00:01.381) 0:41:38.170 **********
2026-04-05 05:55:41.345814 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.345824 | orchestrator |
2026-04-05 05:55:41.345835 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 05:55:41.345846 | orchestrator | Sunday 05 April 2026 05:55:02 +0000 (0:00:00.820) 0:41:38.991 **********
2026-04-05 05:55:41.345856 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:55:41.345868 | orchestrator |
2026-04-05 05:55:41.345895 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 05:55:41.345906 | orchestrator | Sunday 05 April 2026 05:55:04 +0000 (0:00:02.192) 0:41:41.183 **********
2026-04-05 05:55:41.345917 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:55:41.345928 | orchestrator |
2026-04-05 05:55:41.345939 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 05:55:41.345949 | orchestrator | Sunday 05 April 2026 05:55:05 +0000 (0:00:00.868) 0:41:42.052 **********
2026-04-05 05:55:41.345960 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-04-05 05:55:41.346007 | orchestrator |
2026-04-05 05:55:41.346112 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 05:55:41.346133 | orchestrator | Sunday 05 April 2026 05:55:06 +0000 (0:00:01.127) 0:41:43.179 **********
2026-04-05 05:55:41.346152 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346170 | orchestrator |
2026-04-05 05:55:41.346189 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 05:55:41.346209 | orchestrator | Sunday 05 April 2026 05:55:07 +0000 (0:00:01.112) 0:41:44.292 **********
2026-04-05 05:55:41.346228 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346247 | orchestrator |
2026-04-05 05:55:41.346264 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 05:55:41.346277 | orchestrator | Sunday 05 April 2026 05:55:08 +0000 (0:00:01.166) 0:41:45.459 **********
2026-04-05 05:55:41.346289 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346302 | orchestrator |
2026-04-05 05:55:41.346314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 05:55:41.346326 | orchestrator | Sunday 05 April 2026 05:55:09 +0000 (0:00:01.120) 0:41:46.580 **********
2026-04-05 05:55:41.346339 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346351 | orchestrator |
2026-04-05 05:55:41.346363 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 05:55:41.346373 | orchestrator | Sunday 05 April 2026 05:55:11 +0000 (0:00:01.164) 0:41:47.745 **********
2026-04-05 05:55:41.346384 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346394 | orchestrator |
2026-04-05 05:55:41.346405 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 05:55:41.346415 | orchestrator | Sunday 05 April 2026 05:55:12 +0000 (0:00:01.116) 0:41:48.861 **********
2026-04-05 05:55:41.346426 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346437 | orchestrator |
2026-04-05 05:55:41.346448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 05:55:41.346458 | orchestrator | Sunday 05 April 2026 05:55:13 +0000 (0:00:01.129) 0:41:49.990 **********
2026-04-05 05:55:41.346538 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346553 | orchestrator |
2026-04-05 05:55:41.346564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 05:55:41.346574 | orchestrator | Sunday 05 April 2026 05:55:14 +0000 (0:00:01.159) 0:41:51.150 **********
2026-04-05 05:55:41.346585 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.346595 | orchestrator |
2026-04-05 05:55:41.346606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 05:55:41.346617 | orchestrator | Sunday 05 April 2026 05:55:15 +0000 (0:00:01.206) 0:41:52.357 **********
2026-04-05 05:55:41.346627 | orchestrator | ok: [testbed-node-4]
2026-04-05 05:55:41.346638 | orchestrator |
2026-04-05 05:55:41.346648 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 05:55:41.346659 | orchestrator | Sunday 05 April 2026 05:55:16 +0000 (0:00:00.789) 0:41:53.146 **********
2026-04-05 05:55:41.346670 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-04-05 05:55:41.346682 | orchestrator |
2026-04-05 05:55:41.346693 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 05:55:41.346703 | orchestrator | Sunday 05 April 2026 05:55:17 +0000 (0:00:01.104) 0:41:54.251 **********
2026-04-05 05:55:41.346714 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-04-05 05:55:41.346724 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-05 05:55:41.346735 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-05 05:55:41.346745 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-05 05:55:41.346756 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-05 05:55:41.346766 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-05 05:55:41.346791 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-05 05:55:41.346802 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-05 05:55:41.346813 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 05:55:41.346843 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 05:55:41.346854 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 05:55:41.346865 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 05:55:41.346876 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 05:55:41.346887 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 05:55:41.346897 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-04-05 05:55:41.346908 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-04-05 05:55:41.346919 | orchestrator |
2026-04-05 05:55:41.346929 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 05:55:41.346954 | orchestrator | Sunday 05 April 2026 05:55:23 +0000 (0:00:06.179) 0:42:00.430 **********
2026-04-05 05:55:41.346975 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-04-05 05:55:41.346987 | orchestrator |
2026-04-05 05:55:41.346997 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 05:55:41.347016 | orchestrator | Sunday 05 April 2026 05:55:24 +0000 (0:00:01.130) 0:42:01.560 **********
2026-04-05 05:55:41.347027 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 05:55:41.347038 | orchestrator |
2026-04-05 05:55:41.347049 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 05:55:41.347060 | orchestrator | Sunday 05 April 2026 05:55:26 +0000 (0:00:01.553) 0:42:03.114 **********
2026-04-05 05:55:41.347071 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 05:55:41.347081 | orchestrator |
2026-04-05 05:55:41.347092 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 05:55:41.347102 | orchestrator | Sunday 05 April 2026 05:55:28 +0000 (0:00:01.664) 0:42:04.778 **********
2026-04-05 05:55:41.347113 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.347124 | orchestrator |
2026-04-05 05:55:41.347134 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 05:55:41.347145 | orchestrator | Sunday 05 April 2026 05:55:28 +0000 (0:00:00.811) 0:42:05.590 **********
2026-04-05 05:55:41.347155 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.347166 | orchestrator |
2026-04-05 05:55:41.347182 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 05:55:41.347201 | orchestrator | Sunday 05 April 2026 05:55:29 +0000 (0:00:00.786) 0:42:06.376 **********
2026-04-05 05:55:41.347220 | orchestrator | skipping: [testbed-node-4]
2026-04-05 05:55:41.347237 | orchestrator |
2026-04-05 05:55:41.347256 | orchestrator | TASK [ceph-config :
Set_fact rejected_devices] ********************************* 2026-04-05 05:55:41.347273 | orchestrator | Sunday 05 April 2026 05:55:30 +0000 (0:00:00.813) 0:42:07.190 ********** 2026-04-05 05:55:41.347290 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347308 | orchestrator | 2026-04-05 05:55:41.347328 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 05:55:41.347346 | orchestrator | Sunday 05 April 2026 05:55:31 +0000 (0:00:00.945) 0:42:08.135 ********** 2026-04-05 05:55:41.347363 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347383 | orchestrator | 2026-04-05 05:55:41.347401 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 05:55:41.347420 | orchestrator | Sunday 05 April 2026 05:55:32 +0000 (0:00:00.794) 0:42:08.930 ********** 2026-04-05 05:55:41.347439 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347458 | orchestrator | 2026-04-05 05:55:41.347523 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 05:55:41.347538 | orchestrator | Sunday 05 April 2026 05:55:32 +0000 (0:00:00.781) 0:42:09.711 ********** 2026-04-05 05:55:41.347549 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347560 | orchestrator | 2026-04-05 05:55:41.347570 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-05 05:55:41.347581 | orchestrator | Sunday 05 April 2026 05:55:33 +0000 (0:00:00.824) 0:42:10.536 ********** 2026-04-05 05:55:41.347592 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347602 | orchestrator | 2026-04-05 05:55:41.347613 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 05:55:41.347624 | orchestrator | Sunday 05 
April 2026 05:55:34 +0000 (0:00:00.838) 0:42:11.374 ********** 2026-04-05 05:55:41.347634 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347645 | orchestrator | 2026-04-05 05:55:41.347656 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 05:55:41.347666 | orchestrator | Sunday 05 April 2026 05:55:35 +0000 (0:00:00.780) 0:42:12.154 ********** 2026-04-05 05:55:41.347677 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:55:41.347688 | orchestrator | 2026-04-05 05:55:41.347699 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 05:55:41.347709 | orchestrator | Sunday 05 April 2026 05:55:36 +0000 (0:00:00.790) 0:42:12.945 ********** 2026-04-05 05:55:41.347720 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:55:41.347730 | orchestrator | 2026-04-05 05:55:41.347741 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 05:55:41.347751 | orchestrator | Sunday 05 April 2026 05:55:37 +0000 (0:00:00.844) 0:42:13.789 ********** 2026-04-05 05:55:41.347762 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-05 05:55:41.347772 | orchestrator | 2026-04-05 05:55:41.347783 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 05:55:41.347793 | orchestrator | Sunday 05 April 2026 05:55:41 +0000 (0:00:04.151) 0:42:17.940 ********** 2026-04-05 05:55:41.347815 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 05:56:23.349109 | orchestrator | 2026-04-05 05:56:23.349226 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 05:56:23.349243 | orchestrator | Sunday 05 April 2026 05:55:42 +0000 (0:00:00.828) 0:42:18.769 ********** 2026-04-05 05:56:23.349257 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-05 05:56:23.349289 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-05 05:56:23.349303 | orchestrator | 2026-04-05 05:56:23.349315 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 05:56:23.349326 | orchestrator | Sunday 05 April 2026 05:55:49 +0000 (0:00:07.260) 0:42:26.029 ********** 2026-04-05 05:56:23.349338 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349350 | orchestrator | 2026-04-05 05:56:23.349362 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 05:56:23.349374 | orchestrator | Sunday 05 April 2026 05:55:50 +0000 (0:00:00.800) 0:42:26.830 ********** 2026-04-05 05:56:23.349386 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349398 | orchestrator | 2026-04-05 05:56:23.349410 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:56:23.349500 | orchestrator | Sunday 05 April 2026 05:55:50 +0000 (0:00:00.779) 0:42:27.610 ********** 2026-04-05 05:56:23.349515 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349526 | orchestrator | 2026-04-05 05:56:23.349537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-05 05:56:23.349548 | orchestrator | Sunday 05 April 2026 05:55:51 +0000 (0:00:01.032) 0:42:28.643 ********** 2026-04-05 05:56:23.349558 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349571 | orchestrator | 2026-04-05 05:56:23.349591 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:56:23.349612 | orchestrator | Sunday 05 April 2026 05:55:52 +0000 (0:00:00.802) 0:42:29.445 ********** 2026-04-05 05:56:23.349644 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349662 | orchestrator | 2026-04-05 05:56:23.349680 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:56:23.349700 | orchestrator | Sunday 05 April 2026 05:55:53 +0000 (0:00:00.800) 0:42:30.246 ********** 2026-04-05 05:56:23.349717 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.349736 | orchestrator | 2026-04-05 05:56:23.349756 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:56:23.349774 | orchestrator | Sunday 05 April 2026 05:55:54 +0000 (0:00:00.899) 0:42:31.145 ********** 2026-04-05 05:56:23.349794 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 05:56:23.349814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 05:56:23.349833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 05:56:23.349852 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349864 | orchestrator | 2026-04-05 05:56:23.349875 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 05:56:23.349886 | orchestrator | Sunday 05 April 2026 05:55:55 +0000 (0:00:01.092) 0:42:32.238 ********** 2026-04-05 05:56:23.349897 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 05:56:23.349907 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 05:56:23.349918 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 05:56:23.349928 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.349939 | orchestrator | 2026-04-05 05:56:23.349949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:56:23.349960 | orchestrator | Sunday 05 April 2026 05:55:56 +0000 (0:00:01.074) 0:42:33.312 ********** 2026-04-05 05:56:23.349971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 05:56:23.349981 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 05:56:23.349992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 05:56:23.350002 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.350091 | orchestrator | 2026-04-05 05:56:23.350106 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:56:23.350117 | orchestrator | Sunday 05 April 2026 05:55:57 +0000 (0:00:01.113) 0:42:34.426 ********** 2026-04-05 05:56:23.350128 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.350139 | orchestrator | 2026-04-05 05:56:23.350150 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:56:23.350160 | orchestrator | Sunday 05 April 2026 05:55:58 +0000 (0:00:00.820) 0:42:35.247 ********** 2026-04-05 05:56:23.350171 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 05:56:23.350182 | orchestrator | 2026-04-05 05:56:23.350193 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 05:56:23.350204 | orchestrator | Sunday 05 April 2026 05:55:59 +0000 (0:00:01.043) 0:42:36.290 ********** 2026-04-05 05:56:23.350214 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.350225 | orchestrator | 
2026-04-05 05:56:23.350236 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-05 05:56:23.350260 | orchestrator | Sunday 05 April 2026 05:56:01 +0000 (0:00:01.478) 0:42:37.769 ********** 2026-04-05 05:56:23.350270 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.350281 | orchestrator | 2026-04-05 05:56:23.350313 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-05 05:56:23.350325 | orchestrator | Sunday 05 April 2026 05:56:01 +0000 (0:00:00.854) 0:42:38.623 ********** 2026-04-05 05:56:23.350335 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:56:23.350347 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:56:23.350358 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:56:23.350368 | orchestrator | 2026-04-05 05:56:23.350379 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-05 05:56:23.350390 | orchestrator | Sunday 05 April 2026 05:56:03 +0000 (0:00:01.775) 0:42:40.399 ********** 2026-04-05 05:56:23.350400 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-05 05:56:23.350411 | orchestrator | 2026-04-05 05:56:23.350422 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-05 05:56:23.350481 | orchestrator | Sunday 05 April 2026 05:56:04 +0000 (0:00:01.255) 0:42:41.655 ********** 2026-04-05 05:56:23.350495 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.350506 | orchestrator | 2026-04-05 05:56:23.350516 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-05 05:56:23.350527 | orchestrator | Sunday 05 April 2026 05:56:06 +0000 (0:00:01.142) 
0:42:42.797 ********** 2026-04-05 05:56:23.350537 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.350548 | orchestrator | 2026-04-05 05:56:23.350559 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-05 05:56:23.350569 | orchestrator | Sunday 05 April 2026 05:56:07 +0000 (0:00:01.154) 0:42:43.952 ********** 2026-04-05 05:56:23.350580 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.350591 | orchestrator | 2026-04-05 05:56:23.350601 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-05 05:56:23.350612 | orchestrator | Sunday 05 April 2026 05:56:08 +0000 (0:00:01.431) 0:42:45.383 ********** 2026-04-05 05:56:23.350622 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.350633 | orchestrator | 2026-04-05 05:56:23.350644 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-05 05:56:23.350656 | orchestrator | Sunday 05 April 2026 05:56:09 +0000 (0:00:01.227) 0:42:46.611 ********** 2026-04-05 05:56:23.350675 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 05:56:23.350704 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 05:56:23.350723 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 05:56:23.350741 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 05:56:23.350758 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 05:56:23.350776 | orchestrator | 2026-04-05 05:56:23.350792 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-05 05:56:23.350810 | orchestrator | Sunday 05 April 2026 05:56:12 +0000 (0:00:02.522) 0:42:49.134 ********** 2026-04-05 
05:56:23.350829 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.350846 | orchestrator | 2026-04-05 05:56:23.350865 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 05:56:23.350884 | orchestrator | Sunday 05 April 2026 05:56:13 +0000 (0:00:00.810) 0:42:49.944 ********** 2026-04-05 05:56:23.350902 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-05 05:56:23.350920 | orchestrator | 2026-04-05 05:56:23.350934 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 05:56:23.350956 | orchestrator | Sunday 05 April 2026 05:56:14 +0000 (0:00:01.113) 0:42:51.058 ********** 2026-04-05 05:56:23.350967 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 05:56:23.350978 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-05 05:56:23.350988 | orchestrator | 2026-04-05 05:56:23.350999 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 05:56:23.351010 | orchestrator | Sunday 05 April 2026 05:56:16 +0000 (0:00:01.862) 0:42:52.921 ********** 2026-04-05 05:56:23.351020 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 05:56:23.351032 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 05:56:23.351050 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 05:56:23.351077 | orchestrator | 2026-04-05 05:56:23.351097 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 05:56:23.351114 | orchestrator | Sunday 05 April 2026 05:56:19 +0000 (0:00:03.163) 0:42:56.085 ********** 2026-04-05 05:56:23.351132 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 05:56:23.351150 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 
05:56:23.351167 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:56:23.351185 | orchestrator | 2026-04-05 05:56:23.351203 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 05:56:23.351223 | orchestrator | Sunday 05 April 2026 05:56:21 +0000 (0:00:01.659) 0:42:57.744 ********** 2026-04-05 05:56:23.351241 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.351258 | orchestrator | 2026-04-05 05:56:23.351269 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 05:56:23.351280 | orchestrator | Sunday 05 April 2026 05:56:22 +0000 (0:00:01.290) 0:42:59.035 ********** 2026-04-05 05:56:23.351290 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.351301 | orchestrator | 2026-04-05 05:56:23.351312 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 05:56:23.351323 | orchestrator | Sunday 05 April 2026 05:56:23 +0000 (0:00:00.842) 0:42:59.877 ********** 2026-04-05 05:56:23.351333 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:56:23.351344 | orchestrator | 2026-04-05 05:56:23.351366 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 05:57:30.104594 | orchestrator | Sunday 05 April 2026 05:56:23 +0000 (0:00:00.805) 0:43:00.682 ********** 2026-04-05 05:57:30.104707 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-05 05:57:30.104720 | orchestrator | 2026-04-05 05:57:30.104730 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 05:57:30.104738 | orchestrator | Sunday 05 April 2026 05:56:25 +0000 (0:00:01.116) 0:43:01.799 ********** 2026-04-05 05:57:30.104746 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:57:30.104755 | orchestrator | 2026-04-05 05:57:30.104764 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-05 05:57:30.104772 | orchestrator | Sunday 05 April 2026 05:56:26 +0000 (0:00:01.491) 0:43:03.290 ********** 2026-04-05 05:57:30.104780 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:57:30.104787 | orchestrator | 2026-04-05 05:57:30.104795 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-05 05:57:30.104803 | orchestrator | Sunday 05 April 2026 05:56:29 +0000 (0:00:03.333) 0:43:06.623 ********** 2026-04-05 05:57:30.104826 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-04-05 05:57:30.104834 | orchestrator | 2026-04-05 05:57:30.104842 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-05 05:57:30.104850 | orchestrator | Sunday 05 April 2026 05:56:31 +0000 (0:00:01.197) 0:43:07.821 ********** 2026-04-05 05:57:30.104858 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:57:30.104865 | orchestrator | 2026-04-05 05:57:30.104873 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-05 05:57:30.104881 | orchestrator | Sunday 05 April 2026 05:56:33 +0000 (0:00:01.971) 0:43:09.792 ********** 2026-04-05 05:57:30.104911 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:57:30.104920 | orchestrator | 2026-04-05 05:57:30.104929 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-05 05:57:30.104938 | orchestrator | Sunday 05 April 2026 05:56:35 +0000 (0:00:01.949) 0:43:11.742 ********** 2026-04-05 05:57:30.104946 | orchestrator | ok: [testbed-node-4] 2026-04-05 05:57:30.104955 | orchestrator | 2026-04-05 05:57:30.104963 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-05 05:57:30.104972 | orchestrator | Sunday 05 April 2026 05:56:37 +0000 (0:00:02.295) 0:43:14.037 ********** 2026-04-05 
05:57:30.104980 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.104990 | orchestrator | 2026-04-05 05:57:30.104998 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-05 05:57:30.105007 | orchestrator | Sunday 05 April 2026 05:56:38 +0000 (0:00:01.151) 0:43:15.188 ********** 2026-04-05 05:57:30.105015 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105024 | orchestrator | 2026-04-05 05:57:30.105032 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-05 05:57:30.105041 | orchestrator | Sunday 05 April 2026 05:56:39 +0000 (0:00:01.325) 0:43:16.514 ********** 2026-04-05 05:57:30.105050 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-05 05:57:30.105058 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 05:57:30.105067 | orchestrator | 2026-04-05 05:57:30.105075 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-05 05:57:30.105084 | orchestrator | Sunday 05 April 2026 05:56:41 +0000 (0:00:01.799) 0:43:18.313 ********** 2026-04-05 05:57:30.105092 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-05 05:57:30.105101 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 05:57:30.105109 | orchestrator | 2026-04-05 05:57:30.105118 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-05 05:57:30.105127 | orchestrator | Sunday 05 April 2026 05:56:44 +0000 (0:00:02.828) 0:43:21.141 ********** 2026-04-05 05:57:30.105138 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-05 05:57:30.105148 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-05 05:57:30.105158 | orchestrator | 2026-04-05 05:57:30.105168 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-05 05:57:30.105178 | orchestrator | Sunday 05 April 2026 05:56:48 +0000 (0:00:04.063) 
0:43:25.205 ********** 2026-04-05 05:57:30.105188 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105197 | orchestrator | 2026-04-05 05:57:30.105208 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-05 05:57:30.105218 | orchestrator | Sunday 05 April 2026 05:56:49 +0000 (0:00:00.888) 0:43:26.094 ********** 2026-04-05 05:57:30.105228 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105243 | orchestrator | 2026-04-05 05:57:30.105257 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-05 05:57:30.105270 | orchestrator | Sunday 05 April 2026 05:56:50 +0000 (0:00:00.908) 0:43:27.002 ********** 2026-04-05 05:57:30.105285 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105299 | orchestrator | 2026-04-05 05:57:30.105312 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-05 05:57:30.105327 | orchestrator | Sunday 05 April 2026 05:56:51 +0000 (0:00:00.922) 0:43:27.924 ********** 2026-04-05 05:57:30.105345 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105360 | orchestrator | 2026-04-05 05:57:30.105374 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-05 05:57:30.105389 | orchestrator | Sunday 05 April 2026 05:56:51 +0000 (0:00:00.782) 0:43:28.707 ********** 2026-04-05 05:57:30.105433 | orchestrator | skipping: [testbed-node-4] 2026-04-05 05:57:30.105449 | orchestrator | 2026-04-05 05:57:30.105465 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-05 05:57:30.105480 | orchestrator | Sunday 05 April 2026 05:56:52 +0000 (0:00:00.816) 0:43:29.524 ********** 2026-04-05 05:57:30.105506 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-05 05:57:30.105517 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-05 05:57:30.105526 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-05 05:57:30.105552 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-05 05:57:30.105562 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-04-05 05:57:30.105570 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 05:57:30.105579 | orchestrator | 2026-04-05 05:57:30.105588 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-05 05:57:30.105596 | orchestrator | 2026-04-05 05:57:30.105605 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 05:57:30.105613 | orchestrator | Sunday 05 April 2026 05:57:09 +0000 (0:00:16.696) 0:43:46.220 ********** 2026-04-05 05:57:30.105622 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-05 05:57:30.105630 | orchestrator | 2026-04-05 05:57:30.105639 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 05:57:30.105654 | orchestrator | Sunday 05 April 2026 05:57:10 +0000 (0:00:01.326) 0:43:47.547 ********** 2026-04-05 05:57:30.105663 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105672 | orchestrator | 2026-04-05 05:57:30.105680 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 05:57:30.105689 | orchestrator | Sunday 05 April 2026 05:57:12 +0000 (0:00:01.481) 0:43:49.028 ********** 2026-04-05 05:57:30.105697 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105706 | orchestrator | 
2026-04-05 05:57:30.105714 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 05:57:30.105723 | orchestrator | Sunday 05 April 2026 05:57:13 +0000 (0:00:01.136) 0:43:50.165 ********** 2026-04-05 05:57:30.105731 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105740 | orchestrator | 2026-04-05 05:57:30.105748 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 05:57:30.105757 | orchestrator | Sunday 05 April 2026 05:57:14 +0000 (0:00:01.411) 0:43:51.577 ********** 2026-04-05 05:57:30.105765 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105774 | orchestrator | 2026-04-05 05:57:30.105783 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 05:57:30.105791 | orchestrator | Sunday 05 April 2026 05:57:15 +0000 (0:00:01.132) 0:43:52.709 ********** 2026-04-05 05:57:30.105799 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105808 | orchestrator | 2026-04-05 05:57:30.105816 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 05:57:30.105825 | orchestrator | Sunday 05 April 2026 05:57:17 +0000 (0:00:01.194) 0:43:53.904 ********** 2026-04-05 05:57:30.105833 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:30.105842 | orchestrator | 2026-04-05 05:57:30.105850 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 05:57:30.105859 | orchestrator | Sunday 05 April 2026 05:57:18 +0000 (0:00:01.147) 0:43:55.052 ********** 2026-04-05 05:57:30.105867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:57:30.105876 | orchestrator | 2026-04-05 05:57:30.105884 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 05:57:30.105893 | orchestrator | Sunday 05 April 2026 05:57:19 +0000 (0:00:01.118) 0:43:56.170 
**********
2026-04-05 05:57:30.105901 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:30.105910 | orchestrator |
2026-04-05 05:57:30.105918 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 05:57:30.105927 | orchestrator | Sunday 05 April 2026 05:57:20 +0000 (0:00:01.130) 0:43:57.301 **********
2026-04-05 05:57:30.105935 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:57:30.105950 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:57:30.105958 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:57:30.105967 | orchestrator |
2026-04-05 05:57:30.105975 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 05:57:30.105984 | orchestrator | Sunday 05 April 2026 05:57:22 +0000 (0:00:02.111) 0:43:59.412 **********
2026-04-05 05:57:30.105992 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:30.106001 | orchestrator |
2026-04-05 05:57:30.106009 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 05:57:30.106070 | orchestrator | Sunday 05 April 2026 05:57:23 +0000 (0:00:01.253) 0:44:00.666 **********
2026-04-05 05:57:30.106080 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 05:57:30.106089 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 05:57:30.106097 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 05:57:30.106106 | orchestrator |
2026-04-05 05:57:30.106114 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 05:57:30.106127 | orchestrator | Sunday 05 April 2026 05:57:27 +0000 (0:00:03.471) 0:44:04.138 **********
2026-04-05 05:57:30.106142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 05:57:30.106156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 05:57:30.106171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 05:57:30.106186 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:30.106200 | orchestrator |
2026-04-05 05:57:30.106212 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 05:57:30.106221 | orchestrator | Sunday 05 April 2026 05:57:29 +0000 (0:00:01.881) 0:44:06.020 **********
2026-04-05 05:57:30.106231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:57:30.106250 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724741 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.724758 | orchestrator |
2026-04-05 05:57:51.724769 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 05:57:51.724779 | orchestrator | Sunday 05 April 2026 05:57:31 +0000 (0:00:02.197) 0:44:08.218 **********
2026-04-05 05:57:51.724803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724816 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724854 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.724863 | orchestrator |
2026-04-05 05:57:51.724872 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 05:57:51.724881 | orchestrator | Sunday 05 April 2026 05:57:32 +0000 (0:00:01.168) 0:44:09.387 **********
2026-04-05 05:57:51.724892 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 05:57:24.499498', 'end': '2026-04-05 05:57:24.533445', 'delta': '0:00:00.033947', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724905 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 05:57:25.644555', 'end': '2026-04-05 05:57:25.705104', 'delta': '0:00:00.060549', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724929 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 05:57:26.219973', 'end': '2026-04-05 05:57:26.263675', 'delta': '0:00:00.043702', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 05:57:51.724939 | orchestrator |
2026-04-05 05:57:51.724948 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 05:57:51.724957 | orchestrator | Sunday 05 April 2026 05:57:33 +0000 (0:00:01.252) 0:44:10.640 **********
2026-04-05 05:57:51.724965 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.724975 | orchestrator |
2026-04-05 05:57:51.724984 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 05:57:51.724992 | orchestrator | Sunday 05 April 2026 05:57:35 +0000 (0:00:01.278) 0:44:11.973 **********
2026-04-05 05:57:51.725001 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725009 | orchestrator |
2026-04-05 05:57:51.725018 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 05:57:51.725031 | orchestrator | Sunday 05 April 2026 05:57:36 +0000 (0:00:01.212) 0:44:13.252 **********
2026-04-05 05:57:51.725040 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.725055 | orchestrator |
2026-04-05 05:57:51.725064 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 05:57:51.725072 | orchestrator | Sunday 05 April 2026 05:57:37 +0000 (0:00:01.212) 0:44:14.464 **********
2026-04-05 05:57:51.725081 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-05 05:57:51.725090 | orchestrator |
2026-04-05 05:57:51.725098 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:57:51.725107 | orchestrator | Sunday 05 April 2026 05:57:39 +0000 (0:00:02.028) 0:44:16.493 **********
2026-04-05 05:57:51.725115 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.725124 | orchestrator |
2026-04-05 05:57:51.725132 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 05:57:51.725141 | orchestrator | Sunday 05 April 2026 05:57:40 +0000 (0:00:01.192) 0:44:17.686 **********
2026-04-05 05:57:51.725150 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725158 | orchestrator |
2026-04-05 05:57:51.725167 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 05:57:51.725175 | orchestrator | Sunday 05 April 2026 05:57:42 +0000 (0:00:01.112) 0:44:18.798 **********
2026-04-05 05:57:51.725185 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725196 | orchestrator |
2026-04-05 05:57:51.725206 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 05:57:51.725216 | orchestrator | Sunday 05 April 2026 05:57:43 +0000 (0:00:01.222) 0:44:20.020 **********
2026-04-05 05:57:51.725226 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725235 | orchestrator |
2026-04-05 05:57:51.725246 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 05:57:51.725255 | orchestrator | Sunday 05 April 2026 05:57:44 +0000 (0:00:01.136) 0:44:21.157 **********
2026-04-05 05:57:51.725264 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725272 | orchestrator |
2026-04-05 05:57:51.725281 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 05:57:51.725289 | orchestrator | Sunday 05 April 2026 05:57:45 +0000 (0:00:01.172) 0:44:22.329 **********
2026-04-05 05:57:51.725298 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.725306 | orchestrator |
2026-04-05 05:57:51.725315 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 05:57:51.725323 | orchestrator | Sunday 05 April 2026 05:57:46 +0000 (0:00:01.202) 0:44:23.531 **********
2026-04-05 05:57:51.725332 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725340 | orchestrator |
2026-04-05 05:57:51.725349 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 05:57:51.725357 | orchestrator | Sunday 05 April 2026 05:57:48 +0000 (0:00:01.220) 0:44:24.751 **********
2026-04-05 05:57:51.725366 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.725375 | orchestrator |
2026-04-05 05:57:51.725383 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 05:57:51.725425 | orchestrator | Sunday 05 April 2026 05:57:49 +0000 (0:00:01.193) 0:44:25.945 **********
2026-04-05 05:57:51.725434 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:51.725442 | orchestrator |
2026-04-05 05:57:51.725451 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 05:57:51.725461 | orchestrator | Sunday 05 April 2026 05:57:50 +0000 (0:00:01.156) 0:44:27.102 **********
2026-04-05 05:57:51.725470 | orchestrator | ok: [testbed-node-5]
2026-04-05 05:57:51.725479 | orchestrator |
2026-04-05 05:57:51.725487 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 05:57:51.725496 | orchestrator | Sunday 05 April 2026 05:57:51 +0000 (0:00:01.193) 0:44:28.296 **********
2026-04-05 05:57:51.725505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.725526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}})
2026-04-05 05:57:51.852340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-05 05:57:51.852497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}})
2026-04-05 05:57:51.852516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-05 05:57:51.852556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}})
2026-04-05 05:57:51.852648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}})
2026-04-05 05:57:51.852660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:51.852684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-05 05:57:53.187694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:53.187839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 05:57:53.187869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-05 05:57:53.187892 | orchestrator | skipping: [testbed-node-5]
2026-04-05 05:57:53.187911 | orchestrator |
2026-04-05 05:57:53.187929 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-05 05:57:53.187947 | orchestrator | Sunday 05 April 2026 05:57:52 +0000 (0:00:01.401) 0:44:29.698 **********
2026-04-05 05:57:53.187967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.187987 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188035 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188131 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188178 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:53.188207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:58.608834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:58.608906 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:58.608914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:58.608941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 05:57:58.608960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:57:58.608967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:57:58.608976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:57:58.608981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 05:57:58.608986 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:57:58.608991 | orchestrator | 2026-04-05 05:57:58.608996 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 05:57:58.609001 | orchestrator | Sunday 05 April 2026 05:57:54 +0000 (0:00:01.448) 0:44:31.146 ********** 2026-04-05 05:57:58.609004 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:58.609009 | orchestrator | 2026-04-05 05:57:58.609013 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 05:57:58.609017 | orchestrator | Sunday 05 April 2026 05:57:55 +0000 (0:00:01.523) 0:44:32.670 ********** 2026-04-05 05:57:58.609021 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:58.609025 | orchestrator | 2026-04-05 05:57:58.609028 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:57:58.609032 | orchestrator | Sunday 05 April 2026 05:57:57 +0000 (0:00:01.199) 0:44:33.869 ********** 2026-04-05 05:57:58.609036 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:57:58.609040 | orchestrator | 2026-04-05 05:57:58.609043 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:57:58.609049 | orchestrator | Sunday 05 April 2026 05:57:58 +0000 (0:00:01.451) 0:44:35.321 ********** 2026-04-05 05:58:40.761995 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762156 | orchestrator | 2026-04-05 05:58:40.762174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 05:58:40.762202 | orchestrator | Sunday 05 April 2026 05:57:59 +0000 (0:00:01.138) 0:44:36.460 ********** 2026-04-05 05:58:40.762214 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
05:58:40.762225 | orchestrator | 2026-04-05 05:58:40.762236 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 05:58:40.762247 | orchestrator | Sunday 05 April 2026 05:58:01 +0000 (0:00:01.278) 0:44:37.739 ********** 2026-04-05 05:58:40.762258 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762268 | orchestrator | 2026-04-05 05:58:40.762279 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 05:58:40.762290 | orchestrator | Sunday 05 April 2026 05:58:02 +0000 (0:00:01.138) 0:44:38.877 ********** 2026-04-05 05:58:40.762302 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-05 05:58:40.762313 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-05 05:58:40.762323 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-05 05:58:40.762334 | orchestrator | 2026-04-05 05:58:40.762345 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 05:58:40.762356 | orchestrator | Sunday 05 April 2026 05:58:04 +0000 (0:00:02.086) 0:44:40.964 ********** 2026-04-05 05:58:40.762432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 05:58:40.762446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 05:58:40.762456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 05:58:40.762467 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762478 | orchestrator | 2026-04-05 05:58:40.762488 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 05:58:40.762499 | orchestrator | Sunday 05 April 2026 05:58:05 +0000 (0:00:01.300) 0:44:42.264 ********** 2026-04-05 05:58:40.762511 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-05 05:58:40.762523 | 
orchestrator | 2026-04-05 05:58:40.762536 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 05:58:40.762550 | orchestrator | Sunday 05 April 2026 05:58:06 +0000 (0:00:01.173) 0:44:43.438 ********** 2026-04-05 05:58:40.762563 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762575 | orchestrator | 2026-04-05 05:58:40.762587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 05:58:40.762600 | orchestrator | Sunday 05 April 2026 05:58:07 +0000 (0:00:01.165) 0:44:44.603 ********** 2026-04-05 05:58:40.762613 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762626 | orchestrator | 2026-04-05 05:58:40.762638 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 05:58:40.762650 | orchestrator | Sunday 05 April 2026 05:58:09 +0000 (0:00:01.210) 0:44:45.813 ********** 2026-04-05 05:58:40.762663 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762675 | orchestrator | 2026-04-05 05:58:40.762688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 05:58:40.762700 | orchestrator | Sunday 05 April 2026 05:58:10 +0000 (0:00:01.305) 0:44:47.119 ********** 2026-04-05 05:58:40.762713 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.762726 | orchestrator | 2026-04-05 05:58:40.762740 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 05:58:40.762751 | orchestrator | Sunday 05 April 2026 05:58:11 +0000 (0:00:01.256) 0:44:48.376 ********** 2026-04-05 05:58:40.762762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 05:58:40.762773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 05:58:40.762783 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-05 05:58:40.762794 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762804 | orchestrator | 2026-04-05 05:58:40.762815 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 05:58:40.762826 | orchestrator | Sunday 05 April 2026 05:58:13 +0000 (0:00:01.431) 0:44:49.807 ********** 2026-04-05 05:58:40.762837 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 05:58:40.762847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 05:58:40.762858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 05:58:40.762868 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762879 | orchestrator | 2026-04-05 05:58:40.762890 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 05:58:40.762900 | orchestrator | Sunday 05 April 2026 05:58:14 +0000 (0:00:01.386) 0:44:51.194 ********** 2026-04-05 05:58:40.762911 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 05:58:40.762922 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 05:58:40.762932 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 05:58:40.762943 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.762953 | orchestrator | 2026-04-05 05:58:40.762964 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 05:58:40.762975 | orchestrator | Sunday 05 April 2026 05:58:15 +0000 (0:00:01.428) 0:44:52.622 ********** 2026-04-05 05:58:40.762993 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763004 | orchestrator | 2026-04-05 05:58:40.763015 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 05:58:40.763025 | orchestrator | Sunday 05 April 2026 05:58:17 +0000 
(0:00:01.150) 0:44:53.773 ********** 2026-04-05 05:58:40.763036 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 05:58:40.763046 | orchestrator | 2026-04-05 05:58:40.763057 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 05:58:40.763068 | orchestrator | Sunday 05 April 2026 05:58:18 +0000 (0:00:01.321) 0:44:55.095 ********** 2026-04-05 05:58:40.763094 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:58:40.763106 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:58:40.763122 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:58:40.763133 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 05:58:40.763150 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:58:40.763169 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-05 05:58:40.763189 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:58:40.763209 | orchestrator | 2026-04-05 05:58:40.763229 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 05:58:40.763249 | orchestrator | Sunday 05 April 2026 05:58:20 +0000 (0:00:02.205) 0:44:57.300 ********** 2026-04-05 05:58:40.763270 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 05:58:40.763289 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 05:58:40.763308 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 05:58:40.763324 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-05 05:58:40.763335 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 05:58:40.763345 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-05 05:58:40.763356 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 05:58:40.763394 | orchestrator | 2026-04-05 05:58:40.763412 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-05 05:58:40.763432 | orchestrator | Sunday 05 April 2026 05:58:23 +0000 (0:00:02.803) 0:45:00.103 ********** 2026-04-05 05:58:40.763444 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763454 | orchestrator | 2026-04-05 05:58:40.763465 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-05 05:58:40.763476 | orchestrator | Sunday 05 April 2026 05:58:24 +0000 (0:00:01.144) 0:45:01.248 ********** 2026-04-05 05:58:40.763486 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763497 | orchestrator | 2026-04-05 05:58:40.763508 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-05 05:58:40.763518 | orchestrator | Sunday 05 April 2026 05:58:25 +0000 (0:00:00.793) 0:45:02.042 ********** 2026-04-05 05:58:40.763528 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763539 | orchestrator | 2026-04-05 05:58:40.763550 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-05 05:58:40.763560 | orchestrator | Sunday 05 April 2026 05:58:26 +0000 (0:00:00.898) 0:45:02.941 ********** 2026-04-05 05:58:40.763571 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-05 05:58:40.763582 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-05 05:58:40.763592 | orchestrator | 2026-04-05 05:58:40.763603 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-05 05:58:40.763614 | orchestrator | Sunday 05 April 2026 05:58:30 +0000 (0:00:03.846) 0:45:06.787 ********** 2026-04-05 05:58:40.763634 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-05 05:58:40.763645 | orchestrator | 2026-04-05 05:58:40.763656 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 05:58:40.763666 | orchestrator | Sunday 05 April 2026 05:58:31 +0000 (0:00:01.111) 0:45:07.899 ********** 2026-04-05 05:58:40.763677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-05 05:58:40.763688 | orchestrator | 2026-04-05 05:58:40.763698 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 05:58:40.763709 | orchestrator | Sunday 05 April 2026 05:58:32 +0000 (0:00:01.115) 0:45:09.015 ********** 2026-04-05 05:58:40.763720 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.763730 | orchestrator | 2026-04-05 05:58:40.763741 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 05:58:40.763751 | orchestrator | Sunday 05 April 2026 05:58:33 +0000 (0:00:01.135) 0:45:10.151 ********** 2026-04-05 05:58:40.763762 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763773 | orchestrator | 2026-04-05 05:58:40.763783 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 05:58:40.763794 | orchestrator | Sunday 05 April 2026 05:58:34 +0000 (0:00:01.547) 0:45:11.699 ********** 2026-04-05 05:58:40.763804 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763815 | orchestrator | 2026-04-05 05:58:40.763826 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 05:58:40.763837 | orchestrator | 
Sunday 05 April 2026 05:58:36 +0000 (0:00:01.546) 0:45:13.245 ********** 2026-04-05 05:58:40.763847 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:58:40.763858 | orchestrator | 2026-04-05 05:58:40.763868 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 05:58:40.763879 | orchestrator | Sunday 05 April 2026 05:58:38 +0000 (0:00:01.535) 0:45:14.781 ********** 2026-04-05 05:58:40.763890 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.763900 | orchestrator | 2026-04-05 05:58:40.763911 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 05:58:40.763922 | orchestrator | Sunday 05 April 2026 05:58:39 +0000 (0:00:01.411) 0:45:16.192 ********** 2026-04-05 05:58:40.763932 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.763943 | orchestrator | 2026-04-05 05:58:40.763953 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 05:58:40.763964 | orchestrator | Sunday 05 April 2026 05:58:40 +0000 (0:00:01.132) 0:45:17.325 ********** 2026-04-05 05:58:40.763975 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:58:40.763985 | orchestrator | 2026-04-05 05:58:40.764006 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 05:59:21.337604 | orchestrator | Sunday 05 April 2026 05:58:41 +0000 (0:00:01.223) 0:45:18.548 ********** 2026-04-05 05:59:21.337725 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.337743 | orchestrator | 2026-04-05 05:59:21.337772 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 05:59:21.337784 | orchestrator | Sunday 05 April 2026 05:58:43 +0000 (0:00:01.554) 0:45:20.103 ********** 2026-04-05 05:59:21.337795 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.337806 | orchestrator | 2026-04-05 05:59:21.337817 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 05:59:21.337828 | orchestrator | Sunday 05 April 2026 05:58:44 +0000 (0:00:01.546) 0:45:21.650 ********** 2026-04-05 05:59:21.337839 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.337850 | orchestrator | 2026-04-05 05:59:21.337861 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 05:59:21.337872 | orchestrator | Sunday 05 April 2026 05:58:45 +0000 (0:00:00.824) 0:45:22.474 ********** 2026-04-05 05:59:21.337882 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.337893 | orchestrator | 2026-04-05 05:59:21.337904 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 05:59:21.337940 | orchestrator | Sunday 05 April 2026 05:58:46 +0000 (0:00:00.780) 0:45:23.255 ********** 2026-04-05 05:59:21.337951 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.337962 | orchestrator | 2026-04-05 05:59:21.337973 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 05:59:21.337984 | orchestrator | Sunday 05 April 2026 05:58:47 +0000 (0:00:00.828) 0:45:24.083 ********** 2026-04-05 05:59:21.337997 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.338010 | orchestrator | 2026-04-05 05:59:21.338078 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 05:59:21.338098 | orchestrator | Sunday 05 April 2026 05:58:48 +0000 (0:00:00.843) 0:45:24.927 ********** 2026-04-05 05:59:21.338115 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.338132 | orchestrator | 2026-04-05 05:59:21.338149 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 05:59:21.338166 | orchestrator | Sunday 05 April 2026 05:58:49 +0000 (0:00:00.828) 0:45:25.756 ********** 2026-04-05 05:59:21.338185 | 
orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338202 | orchestrator | 2026-04-05 05:59:21.338220 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 05:59:21.338237 | orchestrator | Sunday 05 April 2026 05:58:49 +0000 (0:00:00.759) 0:45:26.515 ********** 2026-04-05 05:59:21.338254 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338270 | orchestrator | 2026-04-05 05:59:21.338287 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 05:59:21.338304 | orchestrator | Sunday 05 April 2026 05:58:50 +0000 (0:00:00.878) 0:45:27.394 ********** 2026-04-05 05:59:21.338320 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338337 | orchestrator | 2026-04-05 05:59:21.338387 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 05:59:21.338406 | orchestrator | Sunday 05 April 2026 05:58:51 +0000 (0:00:00.772) 0:45:28.166 ********** 2026-04-05 05:59:21.338423 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.338442 | orchestrator | 2026-04-05 05:59:21.338460 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 05:59:21.338479 | orchestrator | Sunday 05 April 2026 05:58:52 +0000 (0:00:00.880) 0:45:29.047 ********** 2026-04-05 05:59:21.338497 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.338515 | orchestrator | 2026-04-05 05:59:21.338532 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 05:59:21.338547 | orchestrator | Sunday 05 April 2026 05:58:53 +0000 (0:00:00.852) 0:45:29.900 ********** 2026-04-05 05:59:21.338566 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338584 | orchestrator | 2026-04-05 05:59:21.338603 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 
05:59:21.338621 | orchestrator | Sunday 05 April 2026 05:58:54 +0000 (0:00:00.834) 0:45:30.734 ********** 2026-04-05 05:59:21.338640 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338659 | orchestrator | 2026-04-05 05:59:21.338678 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 05:59:21.338696 | orchestrator | Sunday 05 April 2026 05:58:54 +0000 (0:00:00.808) 0:45:31.543 ********** 2026-04-05 05:59:21.338715 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338726 | orchestrator | 2026-04-05 05:59:21.338739 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 05:59:21.338759 | orchestrator | Sunday 05 April 2026 05:58:55 +0000 (0:00:00.833) 0:45:32.377 ********** 2026-04-05 05:59:21.338793 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338813 | orchestrator | 2026-04-05 05:59:21.338831 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 05:59:21.338850 | orchestrator | Sunday 05 April 2026 05:58:56 +0000 (0:00:00.786) 0:45:33.164 ********** 2026-04-05 05:59:21.338867 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338886 | orchestrator | 2026-04-05 05:59:21.338897 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 05:59:21.338923 | orchestrator | Sunday 05 April 2026 05:58:57 +0000 (0:00:00.790) 0:45:33.954 ********** 2026-04-05 05:59:21.338933 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338944 | orchestrator | 2026-04-05 05:59:21.338955 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 05:59:21.338965 | orchestrator | Sunday 05 April 2026 05:58:58 +0000 (0:00:00.776) 0:45:34.731 ********** 2026-04-05 05:59:21.338976 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.338987 | 
orchestrator | 2026-04-05 05:59:21.338998 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 05:59:21.339030 | orchestrator | Sunday 05 April 2026 05:58:58 +0000 (0:00:00.745) 0:45:35.476 ********** 2026-04-05 05:59:21.339042 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339053 | orchestrator | 2026-04-05 05:59:21.339063 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 05:59:21.339074 | orchestrator | Sunday 05 April 2026 05:58:59 +0000 (0:00:00.794) 0:45:36.271 ********** 2026-04-05 05:59:21.339108 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339119 | orchestrator | 2026-04-05 05:59:21.339130 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 05:59:21.339159 | orchestrator | Sunday 05 April 2026 05:59:00 +0000 (0:00:00.795) 0:45:37.066 ********** 2026-04-05 05:59:21.339171 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339182 | orchestrator | 2026-04-05 05:59:21.339192 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 05:59:21.339203 | orchestrator | Sunday 05 April 2026 05:59:01 +0000 (0:00:00.757) 0:45:37.824 ********** 2026-04-05 05:59:21.339213 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339224 | orchestrator | 2026-04-05 05:59:21.339234 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 05:59:21.339245 | orchestrator | Sunday 05 April 2026 05:59:02 +0000 (0:00:00.903) 0:45:38.727 ********** 2026-04-05 05:59:21.339256 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339266 | orchestrator | 2026-04-05 05:59:21.339277 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 05:59:21.339287 | orchestrator | Sunday 05 April 
2026 05:59:02 +0000 (0:00:00.776) 0:45:39.504 ********** 2026-04-05 05:59:21.339298 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.339308 | orchestrator | 2026-04-05 05:59:21.339319 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 05:59:21.339330 | orchestrator | Sunday 05 April 2026 05:59:04 +0000 (0:00:01.604) 0:45:41.108 ********** 2026-04-05 05:59:21.339340 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.339374 | orchestrator | 2026-04-05 05:59:21.339385 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 05:59:21.339396 | orchestrator | Sunday 05 April 2026 05:59:06 +0000 (0:00:01.899) 0:45:43.008 ********** 2026-04-05 05:59:21.339407 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-05 05:59:21.339419 | orchestrator | 2026-04-05 05:59:21.339429 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 05:59:21.339440 | orchestrator | Sunday 05 April 2026 05:59:07 +0000 (0:00:01.166) 0:45:44.175 ********** 2026-04-05 05:59:21.339450 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339461 | orchestrator | 2026-04-05 05:59:21.339471 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 05:59:21.339482 | orchestrator | Sunday 05 April 2026 05:59:08 +0000 (0:00:01.242) 0:45:45.417 ********** 2026-04-05 05:59:21.339493 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339503 | orchestrator | 2026-04-05 05:59:21.339514 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-05 05:59:21.339524 | orchestrator | Sunday 05 April 2026 05:59:09 +0000 (0:00:01.134) 0:45:46.554 ********** 2026-04-05 05:59:21.339535 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 05:59:21.339556 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 05:59:21.339567 | orchestrator | 2026-04-05 05:59:21.339578 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 05:59:21.339588 | orchestrator | Sunday 05 April 2026 05:59:11 +0000 (0:00:01.827) 0:45:48.382 ********** 2026-04-05 05:59:21.339599 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.339610 | orchestrator | 2026-04-05 05:59:21.339620 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 05:59:21.339631 | orchestrator | Sunday 05 April 2026 05:59:13 +0000 (0:00:01.426) 0:45:49.809 ********** 2026-04-05 05:59:21.339641 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339652 | orchestrator | 2026-04-05 05:59:21.339662 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 05:59:21.339673 | orchestrator | Sunday 05 April 2026 05:59:14 +0000 (0:00:01.154) 0:45:50.964 ********** 2026-04-05 05:59:21.339683 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339694 | orchestrator | 2026-04-05 05:59:21.339739 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 05:59:21.339751 | orchestrator | Sunday 05 April 2026 05:59:15 +0000 (0:00:00.886) 0:45:51.850 ********** 2026-04-05 05:59:21.339761 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339772 | orchestrator | 2026-04-05 05:59:21.339782 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 05:59:21.339793 | orchestrator | Sunday 05 April 2026 05:59:16 +0000 (0:00:00.896) 0:45:52.747 ********** 2026-04-05 05:59:21.339820 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-04-05 05:59:21.339831 | orchestrator | 2026-04-05 05:59:21.339842 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 05:59:21.339852 | orchestrator | Sunday 05 April 2026 05:59:17 +0000 (0:00:01.179) 0:45:53.927 ********** 2026-04-05 05:59:21.339863 | orchestrator | ok: [testbed-node-5] 2026-04-05 05:59:21.339873 | orchestrator | 2026-04-05 05:59:21.339884 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 05:59:21.339895 | orchestrator | Sunday 05 April 2026 05:59:18 +0000 (0:00:01.700) 0:45:55.628 ********** 2026-04-05 05:59:21.339906 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 05:59:21.339916 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 05:59:21.339927 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 05:59:21.339937 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339948 | orchestrator | 2026-04-05 05:59:21.339958 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-05 05:59:21.339969 | orchestrator | Sunday 05 April 2026 05:59:20 +0000 (0:00:01.193) 0:45:56.822 ********** 2026-04-05 05:59:21.339980 | orchestrator | skipping: [testbed-node-5] 2026-04-05 05:59:21.339990 | orchestrator | 2026-04-05 05:59:21.340001 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-05 05:59:21.340011 | orchestrator | Sunday 05 April 2026 05:59:21 +0000 (0:00:01.134) 0:45:57.956 ********** 2026-04-05 05:59:21.340031 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.198752 | orchestrator | 2026-04-05 06:00:05.198847 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 06:00:05.198873 | 
orchestrator | Sunday 05 April 2026 05:59:22 +0000 (0:00:01.160) 0:45:59.117 ********** 2026-04-05 06:00:05.198882 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.198891 | orchestrator | 2026-04-05 06:00:05.198900 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 06:00:05.198908 | orchestrator | Sunday 05 April 2026 05:59:23 +0000 (0:00:01.129) 0:46:00.247 ********** 2026-04-05 06:00:05.198916 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.198924 | orchestrator | 2026-04-05 06:00:05.198932 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 06:00:05.198959 | orchestrator | Sunday 05 April 2026 05:59:24 +0000 (0:00:01.249) 0:46:01.496 ********** 2026-04-05 06:00:05.198967 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.198975 | orchestrator | 2026-04-05 06:00:05.198983 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 06:00:05.198991 | orchestrator | Sunday 05 April 2026 05:59:25 +0000 (0:00:00.831) 0:46:02.327 ********** 2026-04-05 06:00:05.198999 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:05.199007 | orchestrator | 2026-04-05 06:00:05.199016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 06:00:05.199024 | orchestrator | Sunday 05 April 2026 05:59:27 +0000 (0:00:02.160) 0:46:04.488 ********** 2026-04-05 06:00:05.199034 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:05.199048 | orchestrator | 2026-04-05 06:00:05.199065 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 06:00:05.199087 | orchestrator | Sunday 05 April 2026 05:59:28 +0000 (0:00:00.807) 0:46:05.296 ********** 2026-04-05 06:00:05.199101 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-04-05 06:00:05.199115 | orchestrator | 2026-04-05 06:00:05.199131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 06:00:05.199146 | orchestrator | Sunday 05 April 2026 05:59:29 +0000 (0:00:01.098) 0:46:06.395 ********** 2026-04-05 06:00:05.199162 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199177 | orchestrator | 2026-04-05 06:00:05.199188 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 06:00:05.199196 | orchestrator | Sunday 05 April 2026 05:59:31 +0000 (0:00:01.363) 0:46:07.758 ********** 2026-04-05 06:00:05.199204 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199212 | orchestrator | 2026-04-05 06:00:05.199220 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 06:00:05.199228 | orchestrator | Sunday 05 April 2026 05:59:32 +0000 (0:00:01.211) 0:46:08.969 ********** 2026-04-05 06:00:05.199235 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199243 | orchestrator | 2026-04-05 06:00:05.199251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-05 06:00:05.199259 | orchestrator | Sunday 05 April 2026 05:59:33 +0000 (0:00:01.164) 0:46:10.134 ********** 2026-04-05 06:00:05.199266 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199274 | orchestrator | 2026-04-05 06:00:05.199282 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-05 06:00:05.199290 | orchestrator | Sunday 05 April 2026 05:59:34 +0000 (0:00:01.126) 0:46:11.261 ********** 2026-04-05 06:00:05.199298 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199308 | orchestrator | 2026-04-05 06:00:05.199318 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 06:00:05.199360 | orchestrator | 
Sunday 05 April 2026 05:59:35 +0000 (0:00:01.199) 0:46:12.460 ********** 2026-04-05 06:00:05.199370 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199379 | orchestrator | 2026-04-05 06:00:05.199389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 06:00:05.199399 | orchestrator | Sunday 05 April 2026 05:59:36 +0000 (0:00:01.183) 0:46:13.644 ********** 2026-04-05 06:00:05.199409 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199419 | orchestrator | 2026-04-05 06:00:05.199428 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 06:00:05.199437 | orchestrator | Sunday 05 April 2026 05:59:38 +0000 (0:00:01.234) 0:46:14.879 ********** 2026-04-05 06:00:05.199447 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199456 | orchestrator | 2026-04-05 06:00:05.199467 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 06:00:05.199477 | orchestrator | Sunday 05 April 2026 05:59:39 +0000 (0:00:01.217) 0:46:16.096 ********** 2026-04-05 06:00:05.199487 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:05.199497 | orchestrator | 2026-04-05 06:00:05.199514 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 06:00:05.199524 | orchestrator | Sunday 05 April 2026 05:59:40 +0000 (0:00:00.863) 0:46:16.960 ********** 2026-04-05 06:00:05.199534 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-04-05 06:00:05.199544 | orchestrator | 2026-04-05 06:00:05.199554 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-05 06:00:05.199564 | orchestrator | Sunday 05 April 2026 05:59:41 +0000 (0:00:01.152) 0:46:18.112 ********** 2026-04-05 06:00:05.199573 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-04-05 06:00:05.199583 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-05 06:00:05.199593 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-05 06:00:05.199603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-05 06:00:05.199614 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-05 06:00:05.199624 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-05 06:00:05.199634 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-05 06:00:05.199644 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-05 06:00:05.199654 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 06:00:05.199677 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 06:00:05.199686 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 06:00:05.199699 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 06:00:05.199707 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 06:00:05.199715 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 06:00:05.199723 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-05 06:00:05.199731 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-05 06:00:05.199738 | orchestrator | 2026-04-05 06:00:05.199746 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 06:00:05.199754 | orchestrator | Sunday 05 April 2026 05:59:47 +0000 (0:00:06.093) 0:46:24.206 ********** 2026-04-05 06:00:05.199762 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-05 06:00:05.199770 | orchestrator | 2026-04-05 06:00:05.199778 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-04-05 06:00:05.199785 | orchestrator | Sunday 05 April 2026 05:59:48 +0000 (0:00:01.299) 0:46:25.506 ********** 2026-04-05 06:00:05.199794 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:00:05.199803 | orchestrator | 2026-04-05 06:00:05.199810 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-05 06:00:05.199818 | orchestrator | Sunday 05 April 2026 05:59:50 +0000 (0:00:01.496) 0:46:27.002 ********** 2026-04-05 06:00:05.199826 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:00:05.199840 | orchestrator | 2026-04-05 06:00:05.199853 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 06:00:05.199866 | orchestrator | Sunday 05 April 2026 05:59:51 +0000 (0:00:01.676) 0:46:28.678 ********** 2026-04-05 06:00:05.199879 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199891 | orchestrator | 2026-04-05 06:00:05.199903 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 06:00:05.199916 | orchestrator | Sunday 05 April 2026 05:59:52 +0000 (0:00:00.836) 0:46:29.515 ********** 2026-04-05 06:00:05.199930 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199944 | orchestrator | 2026-04-05 06:00:05.199958 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 06:00:05.199971 | orchestrator | Sunday 05 April 2026 05:59:53 +0000 (0:00:00.774) 0:46:30.290 ********** 2026-04-05 06:00:05.199990 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.199998 | orchestrator | 2026-04-05 06:00:05.200006 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-04-05 06:00:05.200014 | orchestrator | Sunday 05 April 2026 05:59:54 +0000 (0:00:00.843) 0:46:31.134 ********** 2026-04-05 06:00:05.200021 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200029 | orchestrator | 2026-04-05 06:00:05.200037 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 06:00:05.200045 | orchestrator | Sunday 05 April 2026 05:59:55 +0000 (0:00:00.798) 0:46:31.932 ********** 2026-04-05 06:00:05.200053 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200060 | orchestrator | 2026-04-05 06:00:05.200068 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 06:00:05.200076 | orchestrator | Sunday 05 April 2026 05:59:55 +0000 (0:00:00.769) 0:46:32.702 ********** 2026-04-05 06:00:05.200084 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200092 | orchestrator | 2026-04-05 06:00:05.200099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 06:00:05.200107 | orchestrator | Sunday 05 April 2026 05:59:56 +0000 (0:00:00.765) 0:46:33.467 ********** 2026-04-05 06:00:05.200115 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200123 | orchestrator | 2026-04-05 06:00:05.200131 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-05 06:00:05.200139 | orchestrator | Sunday 05 April 2026 05:59:57 +0000 (0:00:00.802) 0:46:34.270 ********** 2026-04-05 06:00:05.200146 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200154 | orchestrator | 2026-04-05 06:00:05.200162 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 06:00:05.200169 | orchestrator | Sunday 05 
April 2026 05:59:58 +0000 (0:00:00.771) 0:46:35.042 ********** 2026-04-05 06:00:05.200177 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200185 | orchestrator | 2026-04-05 06:00:05.200193 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 06:00:05.200201 | orchestrator | Sunday 05 April 2026 05:59:59 +0000 (0:00:00.783) 0:46:35.825 ********** 2026-04-05 06:00:05.200208 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:05.200216 | orchestrator | 2026-04-05 06:00:05.200224 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 06:00:05.200232 | orchestrator | Sunday 05 April 2026 05:59:59 +0000 (0:00:00.885) 0:46:36.711 ********** 2026-04-05 06:00:05.200239 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:05.200247 | orchestrator | 2026-04-05 06:00:05.200255 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 06:00:05.200263 | orchestrator | Sunday 05 April 2026 06:00:00 +0000 (0:00:00.944) 0:46:37.655 ********** 2026-04-05 06:00:05.200271 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-05 06:00:05.200279 | orchestrator | 2026-04-05 06:00:05.200287 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 06:00:05.200294 | orchestrator | Sunday 05 April 2026 06:00:05 +0000 (0:00:04.140) 0:46:41.796 ********** 2026-04-05 06:00:05.200308 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:00:47.453602 | orchestrator | 2026-04-05 06:00:47.453729 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 06:00:47.453747 | orchestrator | Sunday 05 April 2026 06:00:05 +0000 (0:00:00.820) 0:46:42.617 ********** 2026-04-05 06:00:47.453763 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-05 06:00:47.453811 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-05 06:00:47.453831 | orchestrator | 2026-04-05 06:00:47.453849 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 06:00:47.453866 | orchestrator | Sunday 05 April 2026 06:00:13 +0000 (0:00:07.320) 0:46:49.937 ********** 2026-04-05 06:00:47.453882 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.453901 | orchestrator | 2026-04-05 06:00:47.453913 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 06:00:47.453923 | orchestrator | Sunday 05 April 2026 06:00:13 +0000 (0:00:00.770) 0:46:50.708 ********** 2026-04-05 06:00:47.453932 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.453941 | orchestrator | 2026-04-05 06:00:47.453952 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:00:47.453963 | orchestrator | Sunday 05 April 2026 06:00:14 +0000 (0:00:00.782) 0:46:51.491 ********** 2026-04-05 06:00:47.453972 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.453982 | orchestrator | 2026-04-05 06:00:47.453991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-05 06:00:47.454001 | orchestrator | Sunday 05 April 2026 06:00:15 +0000 (0:00:00.843) 0:46:52.335 ********** 2026-04-05 06:00:47.454010 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454083 | orchestrator | 2026-04-05 06:00:47.454094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:00:47.454103 | orchestrator | Sunday 05 April 2026 06:00:16 +0000 (0:00:00.827) 0:46:53.163 ********** 2026-04-05 06:00:47.454113 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454122 | orchestrator | 2026-04-05 06:00:47.454132 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:00:47.454142 | orchestrator | Sunday 05 April 2026 06:00:17 +0000 (0:00:00.873) 0:46:54.036 ********** 2026-04-05 06:00:47.454154 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454167 | orchestrator | 2026-04-05 06:00:47.454178 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:00:47.454189 | orchestrator | Sunday 05 April 2026 06:00:18 +0000 (0:00:00.918) 0:46:54.955 ********** 2026-04-05 06:00:47.454201 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:00:47.454212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:00:47.454222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:00:47.454234 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454246 | orchestrator | 2026-04-05 06:00:47.454257 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:00:47.454266 | orchestrator | Sunday 05 April 2026 06:00:19 +0000 (0:00:01.522) 0:46:56.477 ********** 2026-04-05 06:00:47.454276 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:00:47.454285 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:00:47.454295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:00:47.454304 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454338 | orchestrator | 2026-04-05 06:00:47.454348 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:00:47.454357 | orchestrator | Sunday 05 April 2026 06:00:21 +0000 (0:00:01.538) 0:46:58.016 ********** 2026-04-05 06:00:47.454368 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:00:47.454377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:00:47.454387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:00:47.454406 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454416 | orchestrator | 2026-04-05 06:00:47.454426 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 06:00:47.454435 | orchestrator | Sunday 05 April 2026 06:00:22 +0000 (0:00:01.587) 0:46:59.603 ********** 2026-04-05 06:00:47.454445 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454455 | orchestrator | 2026-04-05 06:00:47.454464 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:00:47.454474 | orchestrator | Sunday 05 April 2026 06:00:23 +0000 (0:00:00.859) 0:47:00.463 ********** 2026-04-05 06:00:47.454483 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 06:00:47.454493 | orchestrator | 2026-04-05 06:00:47.454502 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 06:00:47.454512 | orchestrator | Sunday 05 April 2026 06:00:24 +0000 (0:00:01.030) 0:47:01.494 ********** 2026-04-05 06:00:47.454522 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454531 | orchestrator | 
2026-04-05 06:00:47.454541 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-05 06:00:47.454551 | orchestrator | Sunday 05 April 2026 06:00:26 +0000 (0:00:01.419) 0:47:02.913 ********** 2026-04-05 06:00:47.454560 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454570 | orchestrator | 2026-04-05 06:00:47.454597 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-05 06:00:47.454614 | orchestrator | Sunday 05 April 2026 06:00:26 +0000 (0:00:00.780) 0:47:03.693 ********** 2026-04-05 06:00:47.454624 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:00:47.454635 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:00:47.454644 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:00:47.454654 | orchestrator | 2026-04-05 06:00:47.454663 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-05 06:00:47.454673 | orchestrator | Sunday 05 April 2026 06:00:28 +0000 (0:00:01.328) 0:47:05.022 ********** 2026-04-05 06:00:47.454683 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-04-05 06:00:47.454692 | orchestrator | 2026-04-05 06:00:47.454702 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-05 06:00:47.454712 | orchestrator | Sunday 05 April 2026 06:00:29 +0000 (0:00:01.148) 0:47:06.170 ********** 2026-04-05 06:00:47.454722 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454731 | orchestrator | 2026-04-05 06:00:47.454741 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-05 06:00:47.454751 | orchestrator | Sunday 05 April 2026 06:00:30 +0000 (0:00:01.213) 
0:47:07.384 ********** 2026-04-05 06:00:47.454760 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454770 | orchestrator | 2026-04-05 06:00:47.454779 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-05 06:00:47.454789 | orchestrator | Sunday 05 April 2026 06:00:31 +0000 (0:00:01.116) 0:47:08.500 ********** 2026-04-05 06:00:47.454798 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454821 | orchestrator | 2026-04-05 06:00:47.454841 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-05 06:00:47.454851 | orchestrator | Sunday 05 April 2026 06:00:33 +0000 (0:00:01.437) 0:47:09.938 ********** 2026-04-05 06:00:47.454861 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.454870 | orchestrator | 2026-04-05 06:00:47.454880 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-05 06:00:47.454889 | orchestrator | Sunday 05 April 2026 06:00:34 +0000 (0:00:01.148) 0:47:11.086 ********** 2026-04-05 06:00:47.454899 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-05 06:00:47.454909 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-05 06:00:47.454918 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-05 06:00:47.454935 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-05 06:00:47.454945 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-05 06:00:47.454954 | orchestrator | 2026-04-05 06:00:47.454964 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-05 06:00:47.454974 | orchestrator | Sunday 05 April 2026 06:00:36 +0000 (0:00:02.586) 0:47:13.673 ********** 2026-04-05 
06:00:47.454983 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.454993 | orchestrator | 2026-04-05 06:00:47.455002 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-05 06:00:47.455012 | orchestrator | Sunday 05 April 2026 06:00:37 +0000 (0:00:00.776) 0:47:14.450 ********** 2026-04-05 06:00:47.455021 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-04-05 06:00:47.455031 | orchestrator | 2026-04-05 06:00:47.455041 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-05 06:00:47.455050 | orchestrator | Sunday 05 April 2026 06:00:38 +0000 (0:00:01.196) 0:47:15.646 ********** 2026-04-05 06:00:47.455060 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-05 06:00:47.455069 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-05 06:00:47.455079 | orchestrator | 2026-04-05 06:00:47.455088 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-05 06:00:47.455098 | orchestrator | Sunday 05 April 2026 06:00:40 +0000 (0:00:01.920) 0:47:17.567 ********** 2026-04-05 06:00:47.455107 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:00:47.455117 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 06:00:47.455127 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:00:47.455136 | orchestrator | 2026-04-05 06:00:47.455146 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:00:47.455155 | orchestrator | Sunday 05 April 2026 06:00:44 +0000 (0:00:03.185) 0:47:20.752 ********** 2026-04-05 06:00:47.455165 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:00:47.455175 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 
06:00:47.455184 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:00:47.455194 | orchestrator | 2026-04-05 06:00:47.455203 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-05 06:00:47.455213 | orchestrator | Sunday 05 April 2026 06:00:45 +0000 (0:00:01.633) 0:47:22.386 ********** 2026-04-05 06:00:47.455222 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.455232 | orchestrator | 2026-04-05 06:00:47.455242 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-05 06:00:47.455251 | orchestrator | Sunday 05 April 2026 06:00:46 +0000 (0:00:00.866) 0:47:23.253 ********** 2026-04-05 06:00:47.455261 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.455270 | orchestrator | 2026-04-05 06:00:47.455280 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-05 06:00:47.455290 | orchestrator | Sunday 05 April 2026 06:00:47 +0000 (0:00:00.772) 0:47:24.025 ********** 2026-04-05 06:00:47.455299 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:00:47.455326 | orchestrator | 2026-04-05 06:00:47.455342 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-05 06:03:05.166375 | orchestrator | Sunday 05 April 2026 06:00:48 +0000 (0:00:00.782) 0:47:24.807 ********** 2026-04-05 06:03:05.166510 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-04-05 06:03:05.166527 | orchestrator | 2026-04-05 06:03:05.166540 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-05 06:03:05.166551 | orchestrator | Sunday 05 April 2026 06:00:49 +0000 (0:00:01.111) 0:47:25.919 ********** 2026-04-05 06:03:05.166562 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:03:05.167391 | orchestrator | 2026-04-05 06:03:05.167412 | orchestrator | TASK [ceph-osd : Collect osd 
ids] **********************************************
2026-04-05 06:03:05.167425 | orchestrator | Sunday 05 April 2026 06:00:50 +0000 (0:00:01.546) 0:47:27.466 **********
2026-04-05 06:03:05.167436 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.167447 | orchestrator |
2026-04-05 06:03:05.167458 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-05 06:03:05.167469 | orchestrator | Sunday 05 April 2026 06:00:54 +0000 (0:00:03.549) 0:47:31.016 **********
2026-04-05 06:03:05.167479 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-04-05 06:03:05.167490 | orchestrator |
2026-04-05 06:03:05.167501 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-05 06:03:05.167512 | orchestrator | Sunday 05 April 2026 06:00:55 +0000 (0:00:01.184) 0:47:32.200 **********
2026-04-05 06:03:05.167522 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.167533 | orchestrator |
2026-04-05 06:03:05.167544 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-05 06:03:05.167555 | orchestrator | Sunday 05 April 2026 06:00:57 +0000 (0:00:01.996) 0:47:34.197 **********
2026-04-05 06:03:05.167565 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.167576 | orchestrator |
2026-04-05 06:03:05.167588 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-05 06:03:05.167599 | orchestrator | Sunday 05 April 2026 06:00:59 +0000 (0:00:01.963) 0:47:36.160 **********
2026-04-05 06:03:05.167610 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.167620 | orchestrator |
2026-04-05 06:03:05.167631 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-05 06:03:05.167642 | orchestrator | Sunday 05 April 2026 06:01:01 +0000 (0:00:02.247) 0:47:38.407 **********
2026-04-05 06:03:05.167653 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.167664 | orchestrator |
2026-04-05 06:03:05.167675 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-05 06:03:05.167686 | orchestrator | Sunday 05 April 2026 06:01:02 +0000 (0:00:01.129) 0:47:39.537 **********
2026-04-05 06:03:05.167697 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.167708 | orchestrator |
2026-04-05 06:03:05.167718 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-05 06:03:05.167729 | orchestrator | Sunday 05 April 2026 06:01:03 +0000 (0:00:01.143) 0:47:40.680 **********
2026-04-05 06:03:05.167740 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-05 06:03:05.167751 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-05 06:03:05.167762 | orchestrator |
2026-04-05 06:03:05.167772 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-05 06:03:05.167783 | orchestrator | Sunday 05 April 2026 06:01:05 +0000 (0:00:01.876) 0:47:42.556 **********
2026-04-05 06:03:05.167794 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-05 06:03:05.167804 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-04-05 06:03:05.167815 | orchestrator |
2026-04-05 06:03:05.167826 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-05 06:03:05.167837 | orchestrator | Sunday 05 April 2026 06:01:08 +0000 (0:00:02.918) 0:47:45.475 **********
2026-04-05 06:03:05.167847 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-05 06:03:05.167858 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-04-05 06:03:05.167869 | orchestrator |
2026-04-05 06:03:05.167880 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-05 06:03:05.167890 | orchestrator | Sunday 05 April 2026 06:01:13 +0000 (0:00:04.274) 0:47:49.750 **********
2026-04-05 06:03:05.167901 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.167912 | orchestrator |
2026-04-05 06:03:05.167923 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-05 06:03:05.167934 | orchestrator | Sunday 05 April 2026 06:01:13 +0000 (0:00:00.882) 0:47:50.633 **********
2026-04-05 06:03:05.167944 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-05 06:03:05.167964 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-05 06:03:05.167975 | orchestrator |
2026-04-05 06:03:05.167986 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-05 06:03:05.167996 | orchestrator | Sunday 05 April 2026 06:01:27 +0000 (0:00:13.316) 0:48:03.949 **********
2026-04-05 06:03:05.168007 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.168018 | orchestrator |
2026-04-05 06:03:05.168029 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-05 06:03:05.168039 | orchestrator | Sunday 05 April 2026 06:01:28 +0000 (0:00:01.454) 0:48:05.404 **********
2026-04-05 06:03:05.168050 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.168061 | orchestrator |
2026-04-05 06:03:05.168071 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-05 06:03:05.168082 | orchestrator | Sunday 05 April 2026 06:01:29 +0000 (0:00:00.800) 0:48:06.204 **********
2026-04-05 06:03:05.168093 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:03:05.168104 | orchestrator |
2026-04-05 06:03:05.168114 | orchestrator | TASK [Waiting for clean pgs...]
************************************************
2026-04-05 06:03:05.168125 | orchestrator | Sunday 05 April 2026 06:01:30 +0000 (0:00:00.779) 0:48:06.983 **********
2026-04-05 06:03:05.168136 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-05 06:03:05.168146 | orchestrator |
2026-04-05 06:03:05.168157 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-04-05 06:03:05.168168 | orchestrator |
2026-04-05 06:03:05.168198 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:03:05.168217 | orchestrator | Sunday 05 April 2026 06:01:32 +0000 (0:00:02.474) 0:48:09.458 **********
2026-04-05 06:03:05.168228 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:03:05.168239 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:03:05.168250 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.168260 | orchestrator |
2026-04-05 06:03:05.168319 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:03:05.168330 | orchestrator | Sunday 05 April 2026 06:01:34 +0000 (0:00:01.652) 0:48:11.111 **********
2026-04-05 06:03:05.168341 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:03:05.168351 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:03:05.168362 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:03:05.168373 | orchestrator |
2026-04-05 06:03:05.168383 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-04-05 06:03:05.168394 | orchestrator | Sunday 05 April 2026 06:01:36 +0000 (0:00:01.714) 0:48:12.825 **********
2026-04-05 06:03:05.168405 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-05 06:03:05.168416 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-05 06:03:05.168427 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-05 06:03:05.168438 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-05 06:03:05.168450 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-05 06:03:05.168461 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-05 06:03:05.168471 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-05 06:03:05.168482 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-05 06:03:05.168493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-05 06:03:05.168503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-04-05 06:03:05.168523 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-05 06:03:05.168534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-05 06:03:05.168544 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-05 06:03:05.168555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-05 06:03:05.168566 | orchestrator |
2026-04-05 06:03:05.168577 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-04-05 06:03:05.168587 | orchestrator | Sunday 05 April 2026 06:02:48 +0000 (0:01:12.774) 0:49:25.600 **********
2026-04-05 06:03:05.168598 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-05 06:03:05.168609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-05 06:03:05.168620 | orchestrator |
2026-04-05 06:03:05.168630 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-04-05 06:03:05.168641 | orchestrator | Sunday 05 April 2026 06:02:54 +0000 (0:00:05.261) 0:49:30.862 **********
2026-04-05 06:03:05.168652 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 06:03:05.168662 | orchestrator |
2026-04-05 06:03:05.168673 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-04-05 06:03:05.168684 | orchestrator |
2026-04-05 06:03:05.168694 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 06:03:05.168705 | orchestrator | Sunday 05 April 2026 06:02:57 +0000 (0:00:03.344) 0:49:34.208 **********
2026-04-05 06:03:05.168716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-05 06:03:05.168726 | orchestrator |
2026-04-05 06:03:05.168737 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 06:03:05.168747 | orchestrator | Sunday 05 April 2026 06:02:58 +0000 (0:00:01.162) 0:49:35.370 **********
2026-04-05 06:03:05.168758 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:05.168769 | orchestrator |
2026-04-05 06:03:05.168779 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 06:03:05.168790 | orchestrator | Sunday 05 April 2026 06:03:00 +0000 (0:00:01.474) 0:49:36.844 **********
2026-04-05 06:03:05.168801 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:05.168812 | orchestrator |
2026-04-05 06:03:05.168822 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:03:05.168833 | orchestrator | Sunday 05 April 2026 06:03:01 +0000 (0:00:01.135) 0:49:37.980 **********
2026-04-05 06:03:05.168843 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:05.168854 | orchestrator |
2026-04-05 06:03:05.168865 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:03:05.168875 | orchestrator | Sunday 05 April 2026 06:03:02 +0000 (0:00:01.450) 0:49:39.431 **********
2026-04-05 06:03:05.168886 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:05.168897 | orchestrator |
2026-04-05 06:03:05.168907 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 06:03:05.168918 | orchestrator | Sunday 05 April 2026 06:03:04 +0000 (0:00:01.290) 0:49:40.722 **********
2026-04-05 06:03:05.168929 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:05.168939 | orchestrator |
2026-04-05 06:03:05.168950 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 06:03:05.168973 | orchestrator | Sunday 05 April 2026 06:03:05 +0000 (0:00:01.151) 0:49:41.873 **********
2026-04-05 06:03:30.904217 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:30.904482 | orchestrator |
2026-04-05 06:03:30.904505 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 06:03:30.904519 | orchestrator | Sunday 05 April 2026 06:03:06 +0000 (0:00:01.194) 0:49:43.068 **********
2026-04-05 06:03:30.904532 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:03:30.904544 | orchestrator |
2026-04-05 06:03:30.904555 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 06:03:30.904593 | orchestrator | Sunday 05 April 2026 06:03:07 +0000 (0:00:01.154) 0:49:44.222 **********
2026-04-05 06:03:30.904605 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:30.904616 | orchestrator |
2026-04-05 06:03:30.904627 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 06:03:30.904638 | orchestrator | Sunday 05 April 2026 06:03:08 +0000 (0:00:01.197) 0:49:45.421 **********
2026-04-05 06:03:30.904649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 06:03:30.904660 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:03:30.904671 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:03:30.904681 | orchestrator |
2026-04-05 06:03:30.904692 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 06:03:30.904703 | orchestrator | Sunday 05 April 2026 06:03:10 +0000 (0:00:01.794) 0:49:47.216 **********
2026-04-05 06:03:30.904714 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:03:30.904724 | orchestrator |
2026-04-05 06:03:30.904735 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 06:03:30.904746 | orchestrator | Sunday 05 April 2026 06:03:11 +0000 (0:00:01.264) 0:49:48.480 **********
2026-04-05 06:03:30.904759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 06:03:30.904772 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:03:30.904785 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:03:30.904797 | orchestrator |
2026-04-05 06:03:30.904810 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 06:03:30.904822 | orchestrator | Sunday 05 April 2026 06:03:14 +0000 (0:00:02.823) 0:49:51.303 **********
2026-04-05 06:03:30.904835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 06:03:30.904847 |
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 06:03:30.904860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 06:03:30.904873 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.904885 | orchestrator | 2026-04-05 06:03:30.904898 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 06:03:30.904910 | orchestrator | Sunday 05 April 2026 06:03:16 +0000 (0:00:01.544) 0:49:52.849 ********** 2026-04-05 06:03:30.904924 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.904940 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.904954 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.904967 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.904980 | orchestrator | 2026-04-05 06:03:30.904993 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 06:03:30.905006 | orchestrator | Sunday 05 April 2026 06:03:18 +0000 (0:00:02.160) 0:49:55.010 ********** 2026-04-05 06:03:30.905021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.905045 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.905095 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:30.905108 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.905119 | orchestrator | 2026-04-05 06:03:30.905130 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 06:03:30.905142 | orchestrator | Sunday 05 April 2026 06:03:19 +0000 (0:00:01.255) 0:49:56.265 ********** 2026-04-05 06:03:30.905155 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:03:12.300544', 'end': '2026-04-05 06:03:12.346356', 'delta': '0:00:00.045812', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 06:03:30.905171 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:03:12.872025', 'end': '2026-04-05 06:03:12.918994', 'delta': '0:00:00.046969', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 06:03:30.905183 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:03:13.437318', 'end': '2026-04-05 06:03:13.481344', 'delta': '0:00:00.044026', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 06:03:30.905194 | orchestrator | 2026-04-05 06:03:30.905205 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-04-05 06:03:30.905215 | orchestrator | Sunday 05 April 2026 06:03:20 +0000 (0:00:01.307) 0:49:57.573 ********** 2026-04-05 06:03:30.905226 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:30.905237 | orchestrator | 2026-04-05 06:03:30.905248 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 06:03:30.905281 | orchestrator | Sunday 05 April 2026 06:03:22 +0000 (0:00:01.313) 0:49:58.886 ********** 2026-04-05 06:03:30.905300 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.905311 | orchestrator | 2026-04-05 06:03:30.905321 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 06:03:30.905332 | orchestrator | Sunday 05 April 2026 06:03:24 +0000 (0:00:01.889) 0:50:00.775 ********** 2026-04-05 06:03:30.905343 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:30.905353 | orchestrator | 2026-04-05 06:03:30.905397 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 06:03:30.905421 | orchestrator | Sunday 05 April 2026 06:03:25 +0000 (0:00:01.188) 0:50:01.963 ********** 2026-04-05 06:03:30.905432 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:30.905443 | orchestrator | 2026-04-05 06:03:30.905454 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:03:30.905464 | orchestrator | Sunday 05 April 2026 06:03:27 +0000 (0:00:02.094) 0:50:04.058 ********** 2026-04-05 06:03:30.905475 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:30.905486 | orchestrator | 2026-04-05 06:03:30.905496 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 06:03:30.905507 | orchestrator | Sunday 05 April 2026 06:03:28 +0000 (0:00:01.126) 0:50:05.184 ********** 2026-04-05 06:03:30.905518 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.905528 | orchestrator | 
2026-04-05 06:03:30.905539 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 06:03:30.905549 | orchestrator | Sunday 05 April 2026 06:03:29 +0000 (0:00:01.137) 0:50:06.321 ********** 2026-04-05 06:03:30.905560 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:30.905571 | orchestrator | 2026-04-05 06:03:30.905581 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:03:30.905605 | orchestrator | Sunday 05 April 2026 06:03:30 +0000 (0:00:01.288) 0:50:07.610 ********** 2026-04-05 06:03:40.390523 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390636 | orchestrator | 2026-04-05 06:03:40.390655 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 06:03:40.390669 | orchestrator | Sunday 05 April 2026 06:03:32 +0000 (0:00:01.115) 0:50:08.726 ********** 2026-04-05 06:03:40.390681 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390692 | orchestrator | 2026-04-05 06:03:40.390703 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 06:03:40.390714 | orchestrator | Sunday 05 April 2026 06:03:33 +0000 (0:00:01.172) 0:50:09.899 ********** 2026-04-05 06:03:40.390725 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390735 | orchestrator | 2026-04-05 06:03:40.390746 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 06:03:40.390757 | orchestrator | Sunday 05 April 2026 06:03:34 +0000 (0:00:01.193) 0:50:11.092 ********** 2026-04-05 06:03:40.390768 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390778 | orchestrator | 2026-04-05 06:03:40.390789 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 06:03:40.390800 | orchestrator | Sunday 05 April 2026 06:03:35 +0000 
(0:00:01.128) 0:50:12.221 ********** 2026-04-05 06:03:40.390810 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390821 | orchestrator | 2026-04-05 06:03:40.390832 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 06:03:40.390843 | orchestrator | Sunday 05 April 2026 06:03:36 +0000 (0:00:01.128) 0:50:13.350 ********** 2026-04-05 06:03:40.390854 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390865 | orchestrator | 2026-04-05 06:03:40.390876 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 06:03:40.390887 | orchestrator | Sunday 05 April 2026 06:03:37 +0000 (0:00:01.182) 0:50:14.533 ********** 2026-04-05 06:03:40.390898 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.390909 | orchestrator | 2026-04-05 06:03:40.390919 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 06:03:40.390957 | orchestrator | Sunday 05 April 2026 06:03:38 +0000 (0:00:01.135) 0:50:15.669 ********** 2026-04-05 06:03:40.390971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.390985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-04-05 06:03:40.390996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.391010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:03:40.391024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.391068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-05 06:03:40.391083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.391101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 
'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:03:40.391128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.391141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:03:40.391156 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:40.391169 | orchestrator | 2026-04-05 06:03:40.391182 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:03:40.391194 | orchestrator | Sunday 05 April 2026 06:03:40 +0000 (0:00:01.314) 0:50:16.984 ********** 2026-04-05 06:03:40.391221 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.638897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639021 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639085 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639107 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-05 06:03:46.639143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639216 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c31e0cb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 
'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c31e0cb-dff3-47f8-8e2c-6fa8d605ef5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639292 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639317 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:03:46.639340 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:03:46.639362 | orchestrator | 2026-04-05 06:03:46.639385 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 06:03:46.639408 | orchestrator | Sunday 05 April 2026 06:03:42 +0000 (0:00:02.180) 0:50:19.165 ********** 2026-04-05 06:03:46.639427 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:46.639447 | orchestrator | 2026-04-05 06:03:46.639461 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 06:03:46.639474 | orchestrator | Sunday 05 April 2026 06:03:43 +0000 (0:00:01.548) 0:50:20.714 ********** 2026-04-05 06:03:46.639487 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:46.639500 | orchestrator | 2026-04-05 06:03:46.639514 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 06:03:46.639533 | orchestrator | Sunday 05 April 2026 06:03:45 +0000 (0:00:01.149) 0:50:21.863 ********** 2026-04-05 06:03:46.639546 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:03:46.639559 | orchestrator | 2026-04-05 06:03:46.639572 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 06:03:46.639596 | orchestrator | Sunday 05 April 2026 06:03:46 +0000 (0:00:01.487) 0:50:23.351 ********** 2026-04-05 06:04:40.863227 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:04:40.863369 | orchestrator | 2026-04-05 06:04:40.863384 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 06:04:40.863395 
| orchestrator | Sunday 05 April 2026 06:03:47 +0000 (0:00:01.147) 0:50:24.498 ********** 2026-04-05 06:04:40.863404 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:04:40.863413 | orchestrator | 2026-04-05 06:04:40.863422 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 06:04:40.863431 | orchestrator | Sunday 05 April 2026 06:03:49 +0000 (0:00:01.239) 0:50:25.738 ********** 2026-04-05 06:04:40.863440 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:04:40.863449 | orchestrator | 2026-04-05 06:04:40.863458 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 06:04:40.863466 | orchestrator | Sunday 05 April 2026 06:03:50 +0000 (0:00:01.197) 0:50:26.936 ********** 2026-04-05 06:04:40.863476 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 06:04:40.863485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-05 06:04:40.863494 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-05 06:04:40.863502 | orchestrator | 2026-04-05 06:04:40.863511 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 06:04:40.863520 | orchestrator | Sunday 05 April 2026 06:03:52 +0000 (0:00:02.105) 0:50:29.041 ********** 2026-04-05 06:04:40.863528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-05 06:04:40.863538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-05 06:04:40.863546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-05 06:04:40.863555 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:04:40.863564 | orchestrator | 2026-04-05 06:04:40.863573 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 06:04:40.863582 | orchestrator | Sunday 05 April 2026 06:03:53 +0000 (0:00:01.179) 0:50:30.222 
********** 2026-04-05 06:04:40.863590 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:04:40.863599 | orchestrator | 2026-04-05 06:04:40.863608 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 06:04:40.863616 | orchestrator | Sunday 05 April 2026 06:03:54 +0000 (0:00:01.127) 0:50:31.349 ********** 2026-04-05 06:04:40.863625 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 06:04:40.863635 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:04:40.863644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:04:40.863653 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 06:04:40.863662 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 06:04:40.863671 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:04:40.863679 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 06:04:40.863688 | orchestrator | 2026-04-05 06:04:40.863697 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 06:04:40.863705 | orchestrator | Sunday 05 April 2026 06:03:57 +0000 (0:00:02.390) 0:50:33.740 ********** 2026-04-05 06:04:40.863714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-05 06:04:40.863723 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:04:40.863731 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:04:40.863740 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 06:04:40.863749 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-05 06:04:40.863757 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:04:40.863791 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 06:04:40.863802 | orchestrator | 2026-04-05 06:04:40.863812 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-04-05 06:04:40.863823 | orchestrator | Sunday 05 April 2026 06:03:59 +0000 (0:00:02.880) 0:50:36.621 ********** 2026-04-05 06:04:40.863833 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:04:40.863843 | orchestrator | 2026-04-05 06:04:40.863853 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-04-05 06:04:40.863864 | orchestrator | Sunday 05 April 2026 06:04:03 +0000 (0:00:03.629) 0:50:40.250 ********** 2026-04-05 06:04:40.863874 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:04:40.863884 | orchestrator | 2026-04-05 06:04:40.863894 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-04-05 06:04:40.863904 | orchestrator | Sunday 05 April 2026 06:04:06 +0000 (0:00:02.886) 0:50:43.136 ********** 2026-04-05 06:04:40.863915 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:04:40.863925 | orchestrator | 2026-04-05 06:04:40.863935 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-04-05 06:04:40.863945 | orchestrator | Sunday 05 April 2026 06:04:08 +0000 (0:00:02.187) 0:50:45.324 ********** 2026-04-05 06:04:40.863988 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4782', 'value': {'gid': 4782, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.14:6817/74546446', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 74546446}, 
{'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 74546446}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-04-05 06:04:40.864003 | orchestrator | 2026-04-05 06:04:40.864014 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-04-05 06:04:40.864024 | orchestrator | Sunday 05 April 2026 06:04:09 +0000 (0:00:01.164) 0:50:46.489 ********** 2026-04-05 06:04:40.864035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-05 06:04:40.864046 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4) 2026-04-05 06:04:40.864056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-05 06:04:40.864067 | orchestrator | 2026-04-05 06:04:40.864077 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-04-05 06:04:40.864087 | orchestrator | Sunday 05 April 2026 06:04:11 +0000 (0:00:01.599) 0:50:48.088 ********** 2026-04-05 06:04:40.864097 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-04-05 06:04:40.864108 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-04-05 06:04:40.864118 | orchestrator | 2026-04-05 06:04:40.864129 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-04-05 06:04:40.864137 | orchestrator | Sunday 05 April 2026 06:04:12 +0000 (0:00:01.540) 0:50:49.629 ********** 2026-04-05 06:04:40.864146 | orchestrator | changed: [testbed-node-0 -> 
testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:04:40.864155 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 06:04:40.864163 | orchestrator | 2026-04-05 06:04:40.864172 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-04-05 06:04:40.864180 | orchestrator | Sunday 05 April 2026 06:04:23 +0000 (0:00:10.264) 0:50:59.894 ********** 2026-04-05 06:04:40.864189 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:04:40.864205 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 06:04:40.864213 | orchestrator | 2026-04-05 06:04:40.864222 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-04-05 06:04:40.864231 | orchestrator | Sunday 05 April 2026 06:04:26 +0000 (0:00:03.819) 0:51:03.713 ********** 2026-04-05 06:04:40.864260 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:04:40.864269 | orchestrator | 2026-04-05 06:04:40.864277 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-04-05 06:04:40.864286 | orchestrator | Sunday 05 April 2026 06:04:29 +0000 (0:00:02.147) 0:51:05.861 ********** 2026-04-05 06:04:40.864295 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:04:40.864303 | orchestrator | 2026-04-05 06:04:40.864312 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-04-05 06:04:40.864320 | orchestrator | 2026-04-05 06:04:40.864329 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 06:04:40.864337 | orchestrator | Sunday 05 April 2026 06:04:30 +0000 (0:00:01.700) 0:51:07.562 ********** 2026-04-05 06:04:40.864346 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for 
testbed-node-4 2026-04-05 06:04:40.864354 | orchestrator | 2026-04-05 06:04:40.864363 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 06:04:40.864371 | orchestrator | Sunday 05 April 2026 06:04:31 +0000 (0:00:01.105) 0:51:08.667 ********** 2026-04-05 06:04:40.864380 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864389 | orchestrator | 2026-04-05 06:04:40.864397 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 06:04:40.864406 | orchestrator | Sunday 05 April 2026 06:04:33 +0000 (0:00:01.492) 0:51:10.160 ********** 2026-04-05 06:04:40.864414 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864423 | orchestrator | 2026-04-05 06:04:40.864431 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 06:04:40.864440 | orchestrator | Sunday 05 April 2026 06:04:34 +0000 (0:00:01.123) 0:51:11.284 ********** 2026-04-05 06:04:40.864449 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864457 | orchestrator | 2026-04-05 06:04:40.864466 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 06:04:40.864474 | orchestrator | Sunday 05 April 2026 06:04:36 +0000 (0:00:01.479) 0:51:12.763 ********** 2026-04-05 06:04:40.864483 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864491 | orchestrator | 2026-04-05 06:04:40.864500 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 06:04:40.864508 | orchestrator | Sunday 05 April 2026 06:04:37 +0000 (0:00:01.158) 0:51:13.922 ********** 2026-04-05 06:04:40.864517 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864526 | orchestrator | 2026-04-05 06:04:40.864534 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 06:04:40.864543 | orchestrator | Sunday 05 April 
2026 06:04:38 +0000 (0:00:01.160) 0:51:15.083 ********** 2026-04-05 06:04:40.864552 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864560 | orchestrator | 2026-04-05 06:04:40.864569 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 06:04:40.864577 | orchestrator | Sunday 05 April 2026 06:04:39 +0000 (0:00:01.171) 0:51:16.255 ********** 2026-04-05 06:04:40.864590 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:04:40.864599 | orchestrator | 2026-04-05 06:04:40.864608 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 06:04:40.864616 | orchestrator | Sunday 05 April 2026 06:04:40 +0000 (0:00:01.171) 0:51:17.427 ********** 2026-04-05 06:04:40.864624 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:04:40.864633 | orchestrator | 2026-04-05 06:04:40.864647 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 06:05:06.462421 | orchestrator | Sunday 05 April 2026 06:04:41 +0000 (0:00:01.149) 0:51:18.576 ********** 2026-04-05 06:05:06.462516 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:05:06.462551 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:05:06.462560 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:05:06.462567 | orchestrator | 2026-04-05 06:05:06.462576 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 06:05:06.462588 | orchestrator | Sunday 05 April 2026 06:04:43 +0000 (0:00:02.085) 0:51:20.662 ********** 2026-04-05 06:05:06.462595 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:06.462603 | orchestrator | 2026-04-05 06:05:06.462611 | orchestrator | TASK [ceph-facts : Find a running mon container] 
******************************* 2026-04-05 06:05:06.462618 | orchestrator | Sunday 05 April 2026 06:04:45 +0000 (0:00:01.271) 0:51:21.934 ********** 2026-04-05 06:05:06.462625 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:05:06.462633 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:05:06.462640 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:05:06.462647 | orchestrator | 2026-04-05 06:05:06.462654 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 06:05:06.462661 | orchestrator | Sunday 05 April 2026 06:04:48 +0000 (0:00:03.270) 0:51:25.205 ********** 2026-04-05 06:05:06.462668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 06:05:06.462676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 06:05:06.462683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 06:05:06.462691 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.462698 | orchestrator | 2026-04-05 06:05:06.462705 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 06:05:06.462712 | orchestrator | Sunday 05 April 2026 06:04:50 +0000 (0:00:01.807) 0:51:27.012 ********** 2026-04-05 06:05:06.462721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 06:05:06.462731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-04-05 06:05:06.462739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 06:05:06.462746 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.462754 | orchestrator | 2026-04-05 06:05:06.462761 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 06:05:06.462768 | orchestrator | Sunday 05 April 2026 06:04:52 +0000 (0:00:02.044) 0:51:29.057 ********** 2026-04-05 06:05:06.462777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:06.462788 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:06.462796 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-04-05 06:05:06.462809 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.462816 | orchestrator | 2026-04-05 06:05:06.462835 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 06:05:06.462843 | orchestrator | Sunday 05 April 2026 06:04:53 +0000 (0:00:01.260) 0:51:30.318 ********** 2026-04-05 06:05:06.462866 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:04:45.712421', 'end': '2026-04-05 06:04:45.766455', 'delta': '0:00:00.054034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 06:05:06.462877 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:04:46.713288', 'end': '2026-04-05 06:04:46.763413', 'delta': '0:00:00.050125', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 06:05:06.462885 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:04:47.282184', 'end': '2026-04-05 06:04:47.326612', 'delta': '0:00:00.044428', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 06:05:06.462893 | orchestrator | 2026-04-05 06:05:06.462900 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 06:05:06.462908 | orchestrator | Sunday 05 April 2026 06:04:54 +0000 (0:00:01.197) 0:51:31.515 ********** 2026-04-05 06:05:06.462915 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:06.462922 | orchestrator | 2026-04-05 06:05:06.462929 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 06:05:06.462936 | orchestrator | Sunday 05 April 2026 06:04:56 +0000 (0:00:01.269) 0:51:32.785 ********** 2026-04-05 06:05:06.462944 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.462951 | orchestrator | 2026-04-05 06:05:06.462958 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 06:05:06.462965 | orchestrator | Sunday 05 April 2026 06:04:57 +0000 (0:00:01.338) 0:51:34.124 ********** 2026-04-05 06:05:06.462972 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:06.462981 | orchestrator | 2026-04-05 06:05:06.462989 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 06:05:06.463004 | 
orchestrator | Sunday 05 April 2026 06:04:58 +0000 (0:00:01.139) 0:51:35.264 ********** 2026-04-05 06:05:06.463013 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:05:06.463022 | orchestrator | 2026-04-05 06:05:06.463030 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:05:06.463039 | orchestrator | Sunday 05 April 2026 06:05:00 +0000 (0:00:01.981) 0:51:37.245 ********** 2026-04-05 06:05:06.463048 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:06.463057 | orchestrator | 2026-04-05 06:05:06.463065 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 06:05:06.463074 | orchestrator | Sunday 05 April 2026 06:05:01 +0000 (0:00:01.130) 0:51:38.376 ********** 2026-04-05 06:05:06.463083 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.463091 | orchestrator | 2026-04-05 06:05:06.463100 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 06:05:06.463108 | orchestrator | Sunday 05 April 2026 06:05:02 +0000 (0:00:01.192) 0:51:39.569 ********** 2026-04-05 06:05:06.463117 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.463125 | orchestrator | 2026-04-05 06:05:06.463134 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:05:06.463143 | orchestrator | Sunday 05 April 2026 06:05:04 +0000 (0:00:01.238) 0:51:40.808 ********** 2026-04-05 06:05:06.463151 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.463160 | orchestrator | 2026-04-05 06:05:06.463167 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 06:05:06.463178 | orchestrator | Sunday 05 April 2026 06:05:05 +0000 (0:00:01.130) 0:51:41.938 ********** 2026-04-05 06:05:06.463186 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:06.463193 | 
orchestrator | 2026-04-05 06:05:06.463200 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 06:05:06.463208 | orchestrator | Sunday 05 April 2026 06:05:06 +0000 (0:00:01.125) 0:51:43.064 ********** 2026-04-05 06:05:06.463219 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:12.256637 | orchestrator | 2026-04-05 06:05:12.256727 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 06:05:12.256742 | orchestrator | Sunday 05 April 2026 06:05:07 +0000 (0:00:01.160) 0:51:44.225 ********** 2026-04-05 06:05:12.256753 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:12.256764 | orchestrator | 2026-04-05 06:05:12.256774 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 06:05:12.256784 | orchestrator | Sunday 05 April 2026 06:05:08 +0000 (0:00:01.114) 0:51:45.339 ********** 2026-04-05 06:05:12.256794 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:12.256804 | orchestrator | 2026-04-05 06:05:12.256814 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 06:05:12.256824 | orchestrator | Sunday 05 April 2026 06:05:09 +0000 (0:00:01.169) 0:51:46.508 ********** 2026-04-05 06:05:12.256833 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:12.256843 | orchestrator | 2026-04-05 06:05:12.256853 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 06:05:12.256863 | orchestrator | Sunday 05 April 2026 06:05:10 +0000 (0:00:01.103) 0:51:47.612 ********** 2026-04-05 06:05:12.256872 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:12.256882 | orchestrator | 2026-04-05 06:05:12.256892 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 06:05:12.256901 | orchestrator | Sunday 05 April 2026 06:05:12 +0000 
(0:00:01.173) 0:51:48.785 ********** 2026-04-05 06:05:12.256913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:12.256947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}})  2026-04-05 06:05:12.256962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:05:12.256973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}})  2026-04-05 06:05:12.256996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:12.257021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:12.257033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:05:12.257044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:12.257062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:05:12.257072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:12.257083 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}})  2026-04-05 06:05:12.257097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}})  2026-04-05 06:05:12.257115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:13.550088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:05:13.550200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:13.550217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:05:13.550283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:05:13.550298 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:05:13.550310 | orchestrator | 2026-04-05 06:05:13.550334 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:05:13.550346 | orchestrator | Sunday 05 April 2026 06:05:13 +0000 (0:00:01.320) 0:51:50.106 ********** 2026-04-05 06:05:13.550377 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.550391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.550412 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.550424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.550437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.550461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674775 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674863 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:13.674901 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:05:50.244781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:05:50.244899 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.244917 | orchestrator |
2026-04-05 06:05:50.244930 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 06:05:50.244942 | orchestrator | Sunday 05 April 2026 06:05:14 +0000 (0:00:01.415) 0:51:51.521 **********
2026-04-05 06:05:50.244954 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:05:50.244966 | orchestrator |
2026-04-05 06:05:50.244978 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 06:05:50.244989 | orchestrator | Sunday 05 April 2026 06:05:16 +0000 (0:00:01.471) 0:51:52.993 **********
2026-04-05 06:05:50.245000 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:05:50.245011 | orchestrator |
2026-04-05 06:05:50.245023 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:05:50.245034 | orchestrator | Sunday 05 April 2026 06:05:17 +0000 (0:00:01.128) 0:51:54.122 **********
2026-04-05 06:05:50.245045 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:05:50.245056 | orchestrator |
2026-04-05 06:05:50.245067 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:05:50.245078 | orchestrator | Sunday 05 April 2026 06:05:18 +0000 (0:00:01.543) 0:51:55.665 **********
2026-04-05 06:05:50.245091 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245102 | orchestrator |
2026-04-05 06:05:50.245113 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:05:50.245124 | orchestrator | Sunday 05 April 2026 06:05:20 +0000 (0:00:01.218) 0:51:56.884 **********
2026-04-05 06:05:50.245136 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245147 | orchestrator |
2026-04-05 06:05:50.245158 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:05:50.245169 | orchestrator | Sunday 05 April 2026 06:05:21 +0000 (0:00:01.300) 0:51:58.184 **********
2026-04-05 06:05:50.245180 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245192 | orchestrator |
2026-04-05 06:05:50.245203 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 06:05:50.245214 | orchestrator | Sunday 05 April 2026 06:05:22 +0000 (0:00:01.279) 0:51:59.464 **********
2026-04-05 06:05:50.245253 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 06:05:50.245264 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 06:05:50.245275 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 06:05:50.245286 | orchestrator |
2026-04-05 06:05:50.245298 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 06:05:50.245311 | orchestrator | Sunday 05 April 2026 06:05:24 +0000 (0:00:02.106) 0:52:01.571 **********
2026-04-05 06:05:50.245325 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-05 06:05:50.245362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-05 06:05:50.245377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-05 06:05:50.245389 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245402 | orchestrator |
2026-04-05 06:05:50.245415 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 06:05:50.245428 | orchestrator | Sunday 05 April 2026 06:05:26 +0000 (0:00:01.339) 0:52:02.786 **********
2026-04-05 06:05:50.245456 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-04-05 06:05:50.245471 | orchestrator |
2026-04-05 06:05:50.245485 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 06:05:50.245500 | orchestrator | Sunday 05 April 2026 06:05:27 +0000 (0:00:01.339) 0:52:04.126 **********
2026-04-05 06:05:50.245513 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245526 | orchestrator |
2026-04-05 06:05:50.245540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 06:05:50.245554 | orchestrator | Sunday 05 April 2026 06:05:28 +0000 (0:00:01.147) 0:52:05.274 **********
2026-04-05 06:05:50.245567 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245580 | orchestrator |
2026-04-05 06:05:50.245593 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 06:05:50.245607 | orchestrator | Sunday 05 April 2026 06:05:29 +0000 (0:00:01.202) 0:52:06.476 **********
2026-04-05 06:05:50.245619 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245632 | orchestrator |
2026-04-05 06:05:50.245645 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 06:05:50.245658 | orchestrator | Sunday 05 April 2026 06:05:30 +0000 (0:00:01.137) 0:52:07.614 **********
2026-04-05 06:05:50.245670 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:05:50.245681 | orchestrator |
2026-04-05 06:05:50.245692 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 06:05:50.245703 | orchestrator | Sunday 05 April 2026 06:05:32 +0000 (0:00:01.224) 0:52:08.839 **********
2026-04-05 06:05:50.245714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:05:50.245742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:05:50.245754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:05:50.245765 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245776 | orchestrator |
2026-04-05 06:05:50.245787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 06:05:50.245798 | orchestrator | Sunday 05 April 2026 06:05:33 +0000 (0:00:01.437) 0:52:10.277 **********
2026-04-05 06:05:50.245809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:05:50.245820 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:05:50.245830 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:05:50.245841 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245852 | orchestrator |
2026-04-05 06:05:50.245863 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 06:05:50.245874 | orchestrator | Sunday 05 April 2026 06:05:34 +0000 (0:00:01.403) 0:52:11.680 **********
2026-04-05 06:05:50.245885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:05:50.245896 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:05:50.245907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:05:50.245918 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.245928 | orchestrator |
2026-04-05 06:05:50.245940 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 06:05:50.245951 | orchestrator | Sunday 05 April 2026 06:05:36 +0000 (0:00:01.398) 0:52:13.079 **********
2026-04-05 06:05:50.245962 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:05:50.245981 | orchestrator |
2026-04-05 06:05:50.245992 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 06:05:50.246003 | orchestrator | Sunday 05 April 2026 06:05:37 +0000 (0:00:01.190) 0:52:14.270 **********
2026-04-05 06:05:50.246014 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 06:05:50.246079 | orchestrator |
2026-04-05 06:05:50.246090 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 06:05:50.246101 | orchestrator | Sunday 05 April 2026 06:05:38 +0000 (0:00:01.380) 0:52:15.650 **********
2026-04-05 06:05:50.246112 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:05:50.246123 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:05:50.246134 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:05:50.246144 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:05:50.246155 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:05:50.246212 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 06:05:50.246245 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:05:50.246256 | orchestrator |
2026-04-05 06:05:50.246267 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 06:05:50.246278 | orchestrator | Sunday 05 April 2026 06:05:41 +0000 (0:00:02.280) 0:52:17.931 **********
2026-04-05 06:05:50.246289 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:05:50.246300 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:05:50.246311 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:05:50.246321 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:05:50.246332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:05:50.246343 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 06:05:50.246354 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:05:50.246365 | orchestrator |
2026-04-05 06:05:50.246382 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-04-05 06:05:50.246393 | orchestrator | Sunday 05 April 2026 06:05:44 +0000 (0:00:02.875) 0:52:20.806 **********
2026-04-05 06:05:50.246404 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.246415 | orchestrator |
2026-04-05 06:05:50.246426 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 06:05:50.246436 | orchestrator | Sunday 05 April 2026 06:05:45 +0000 (0:00:01.134) 0:52:21.941 **********
2026-04-05 06:05:50.246447 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-04-05 06:05:50.246458 | orchestrator |
2026-04-05 06:05:50.246469 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 06:05:50.246480 | orchestrator | Sunday 05 April 2026 06:05:46 +0000 (0:00:01.138) 0:52:23.079 **********
2026-04-05 06:05:50.246490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-04-05 06:05:50.246501 | orchestrator |
2026-04-05 06:05:50.246512 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 06:05:50.246523 | orchestrator | Sunday 05 April 2026 06:05:47 +0000 (0:00:01.189) 0:52:24.268 **********
2026-04-05 06:05:50.246534 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:05:50.246544 | orchestrator |
2026-04-05 06:05:50.246555 | orchestrator
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 06:05:50.246566 | orchestrator | Sunday 05 April 2026 06:05:48 +0000 (0:00:01.115) 0:52:25.384 ********** 2026-04-05 06:05:50.246577 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:05:50.246595 | orchestrator | 2026-04-05 06:05:50.246606 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-05 06:05:50.246625 | orchestrator | Sunday 05 April 2026 06:05:50 +0000 (0:00:01.566) 0:52:26.951 ********** 2026-04-05 06:06:41.600582 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.600698 | orchestrator | 2026-04-05 06:06:41.600716 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 06:06:41.600729 | orchestrator | Sunday 05 April 2026 06:05:51 +0000 (0:00:01.568) 0:52:28.521 ********** 2026-04-05 06:06:41.600740 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.600752 | orchestrator | 2026-04-05 06:06:41.600763 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 06:06:41.600774 | orchestrator | Sunday 05 April 2026 06:05:53 +0000 (0:00:01.598) 0:52:30.119 ********** 2026-04-05 06:06:41.600785 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.600797 | orchestrator | 2026-04-05 06:06:41.600808 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 06:06:41.600819 | orchestrator | Sunday 05 April 2026 06:05:54 +0000 (0:00:01.194) 0:52:31.314 ********** 2026-04-05 06:06:41.600829 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.600840 | orchestrator | 2026-04-05 06:06:41.600851 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 06:06:41.600862 | orchestrator | Sunday 05 April 2026 06:05:55 +0000 (0:00:01.188) 0:52:32.502 ********** 2026-04-05 06:06:41.600873 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.600884 | orchestrator | 2026-04-05 06:06:41.600895 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 06:06:41.600905 | orchestrator | Sunday 05 April 2026 06:05:57 +0000 (0:00:01.225) 0:52:33.728 ********** 2026-04-05 06:06:41.600916 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.600927 | orchestrator | 2026-04-05 06:06:41.600938 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 06:06:41.600949 | orchestrator | Sunday 05 April 2026 06:05:58 +0000 (0:00:01.826) 0:52:35.555 ********** 2026-04-05 06:06:41.600960 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.600970 | orchestrator | 2026-04-05 06:06:41.600981 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 06:06:41.600992 | orchestrator | Sunday 05 April 2026 06:06:00 +0000 (0:00:01.627) 0:52:37.183 ********** 2026-04-05 06:06:41.601003 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601013 | orchestrator | 2026-04-05 06:06:41.601024 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 06:06:41.601035 | orchestrator | Sunday 05 April 2026 06:06:01 +0000 (0:00:01.135) 0:52:38.318 ********** 2026-04-05 06:06:41.601046 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601057 | orchestrator | 2026-04-05 06:06:41.601067 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 06:06:41.601078 | orchestrator | Sunday 05 April 2026 06:06:02 +0000 (0:00:01.166) 0:52:39.485 ********** 2026-04-05 06:06:41.601089 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.601100 | orchestrator | 2026-04-05 06:06:41.601113 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 
06:06:41.601126 | orchestrator | Sunday 05 April 2026 06:06:03 +0000 (0:00:01.181) 0:52:40.667 ********** 2026-04-05 06:06:41.601141 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.601154 | orchestrator | 2026-04-05 06:06:41.601166 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 06:06:41.601180 | orchestrator | Sunday 05 April 2026 06:06:05 +0000 (0:00:01.178) 0:52:41.845 ********** 2026-04-05 06:06:41.601193 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.601259 | orchestrator | 2026-04-05 06:06:41.601283 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 06:06:41.601301 | orchestrator | Sunday 05 April 2026 06:06:06 +0000 (0:00:01.142) 0:52:42.987 ********** 2026-04-05 06:06:41.601313 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601352 | orchestrator | 2026-04-05 06:06:41.601364 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 06:06:41.601375 | orchestrator | Sunday 05 April 2026 06:06:07 +0000 (0:00:01.140) 0:52:44.128 ********** 2026-04-05 06:06:41.601386 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601397 | orchestrator | 2026-04-05 06:06:41.601408 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 06:06:41.601419 | orchestrator | Sunday 05 April 2026 06:06:08 +0000 (0:00:01.203) 0:52:45.331 ********** 2026-04-05 06:06:41.601430 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601440 | orchestrator | 2026-04-05 06:06:41.601467 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 06:06:41.601478 | orchestrator | Sunday 05 April 2026 06:06:09 +0000 (0:00:01.143) 0:52:46.474 ********** 2026-04-05 06:06:41.601489 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.601500 | orchestrator | 2026-04-05 
06:06:41.601510 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 06:06:41.601521 | orchestrator | Sunday 05 April 2026 06:06:10 +0000 (0:00:01.200) 0:52:47.675 ********** 2026-04-05 06:06:41.601532 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.601542 | orchestrator | 2026-04-05 06:06:41.601553 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 06:06:41.601564 | orchestrator | Sunday 05 April 2026 06:06:12 +0000 (0:00:01.142) 0:52:48.818 ********** 2026-04-05 06:06:41.601575 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601586 | orchestrator | 2026-04-05 06:06:41.601596 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 06:06:41.601607 | orchestrator | Sunday 05 April 2026 06:06:13 +0000 (0:00:01.122) 0:52:49.940 ********** 2026-04-05 06:06:41.601618 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601629 | orchestrator | 2026-04-05 06:06:41.601639 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 06:06:41.601650 | orchestrator | Sunday 05 April 2026 06:06:14 +0000 (0:00:01.261) 0:52:51.202 ********** 2026-04-05 06:06:41.601661 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601672 | orchestrator | 2026-04-05 06:06:41.601683 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 06:06:41.601694 | orchestrator | Sunday 05 April 2026 06:06:15 +0000 (0:00:01.160) 0:52:52.362 ********** 2026-04-05 06:06:41.601704 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601716 | orchestrator | 2026-04-05 06:06:41.601727 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 06:06:41.601756 | orchestrator | Sunday 05 April 2026 06:06:16 +0000 (0:00:01.107) 0:52:53.469 
********** 2026-04-05 06:06:41.601767 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601778 | orchestrator | 2026-04-05 06:06:41.601789 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 06:06:41.601800 | orchestrator | Sunday 05 April 2026 06:06:17 +0000 (0:00:01.120) 0:52:54.590 ********** 2026-04-05 06:06:41.601811 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601822 | orchestrator | 2026-04-05 06:06:41.601832 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 06:06:41.601843 | orchestrator | Sunday 05 April 2026 06:06:18 +0000 (0:00:01.102) 0:52:55.693 ********** 2026-04-05 06:06:41.601854 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601865 | orchestrator | 2026-04-05 06:06:41.601875 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 06:06:41.601887 | orchestrator | Sunday 05 April 2026 06:06:20 +0000 (0:00:01.112) 0:52:56.806 ********** 2026-04-05 06:06:41.601898 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601909 | orchestrator | 2026-04-05 06:06:41.601919 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 06:06:41.601930 | orchestrator | Sunday 05 April 2026 06:06:21 +0000 (0:00:01.130) 0:52:57.936 ********** 2026-04-05 06:06:41.601941 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.601960 | orchestrator | 2026-04-05 06:06:41.601971 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 06:06:41.601982 | orchestrator | Sunday 05 April 2026 06:06:22 +0000 (0:00:01.101) 0:52:59.037 ********** 2026-04-05 06:06:41.601993 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602004 | orchestrator | 2026-04-05 06:06:41.602014 | orchestrator | TASK [ceph-common : Include 
configure_memory_allocator.yml] ******************** 2026-04-05 06:06:41.602080 | orchestrator | Sunday 05 April 2026 06:06:23 +0000 (0:00:01.126) 0:53:00.164 ********** 2026-04-05 06:06:41.602091 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602102 | orchestrator | 2026-04-05 06:06:41.602112 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-05 06:06:41.602123 | orchestrator | Sunday 05 April 2026 06:06:24 +0000 (0:00:01.186) 0:53:01.350 ********** 2026-04-05 06:06:41.602134 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602144 | orchestrator | 2026-04-05 06:06:41.602155 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 06:06:41.602165 | orchestrator | Sunday 05 April 2026 06:06:25 +0000 (0:00:01.220) 0:53:02.571 ********** 2026-04-05 06:06:41.602176 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.602187 | orchestrator | 2026-04-05 06:06:41.602198 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 06:06:41.602234 | orchestrator | Sunday 05 April 2026 06:06:27 +0000 (0:00:01.965) 0:53:04.537 ********** 2026-04-05 06:06:41.602246 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.602257 | orchestrator | 2026-04-05 06:06:41.602267 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 06:06:41.602278 | orchestrator | Sunday 05 April 2026 06:06:30 +0000 (0:00:02.322) 0:53:06.859 ********** 2026-04-05 06:06:41.602289 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-05 06:06:41.602301 | orchestrator | 2026-04-05 06:06:41.602312 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 06:06:41.602322 | orchestrator | Sunday 05 April 2026 06:06:31 +0000 (0:00:01.143) 0:53:08.002 
********** 2026-04-05 06:06:41.602333 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602344 | orchestrator | 2026-04-05 06:06:41.602354 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 06:06:41.602365 | orchestrator | Sunday 05 April 2026 06:06:32 +0000 (0:00:01.165) 0:53:09.167 ********** 2026-04-05 06:06:41.602376 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602386 | orchestrator | 2026-04-05 06:06:41.602397 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-05 06:06:41.602408 | orchestrator | Sunday 05 April 2026 06:06:33 +0000 (0:00:01.115) 0:53:10.283 ********** 2026-04-05 06:06:41.602418 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 06:06:41.602435 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 06:06:41.602446 | orchestrator | 2026-04-05 06:06:41.602457 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 06:06:41.602468 | orchestrator | Sunday 05 April 2026 06:06:35 +0000 (0:00:01.823) 0:53:12.107 ********** 2026-04-05 06:06:41.602478 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:06:41.602489 | orchestrator | 2026-04-05 06:06:41.602500 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 06:06:41.602511 | orchestrator | Sunday 05 April 2026 06:06:36 +0000 (0:00:01.427) 0:53:13.534 ********** 2026-04-05 06:06:41.602521 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602532 | orchestrator | 2026-04-05 06:06:41.602543 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 06:06:41.602554 | orchestrator | Sunday 05 April 2026 06:06:37 +0000 (0:00:01.165) 0:53:14.701 ********** 2026-04-05 06:06:41.602564 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602575 | orchestrator | 2026-04-05 06:06:41.602593 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 06:06:41.602604 | orchestrator | Sunday 05 April 2026 06:06:39 +0000 (0:00:01.155) 0:53:15.856 ********** 2026-04-05 06:06:41.602615 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:06:41.602626 | orchestrator | 2026-04-05 06:06:41.602636 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 06:06:41.602647 | orchestrator | Sunday 05 April 2026 06:06:40 +0000 (0:00:01.272) 0:53:17.128 ********** 2026-04-05 06:06:41.602658 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-05 06:06:41.602669 | orchestrator | 2026-04-05 06:06:41.602680 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 06:06:41.602698 | orchestrator | Sunday 05 April 2026 06:06:41 +0000 (0:00:01.180) 0:53:18.309 ********** 2026-04-05 06:07:27.814647 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:07:27.814752 | orchestrator | 2026-04-05 06:07:27.814766 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 06:07:27.814778 | orchestrator | Sunday 05 April 2026 06:06:43 +0000 (0:00:01.708) 0:53:20.017 ********** 2026-04-05 06:07:27.814788 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 06:07:27.814796 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 06:07:27.814805 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 06:07:27.814814 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814824 | orchestrator | 2026-04-05 06:07:27.814834 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-04-05 06:07:27.814843 | orchestrator | Sunday 05 April 2026 06:06:44 +0000 (0:00:01.240) 0:53:21.257 ********** 2026-04-05 06:07:27.814852 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814860 | orchestrator | 2026-04-05 06:07:27.814869 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-05 06:07:27.814878 | orchestrator | Sunday 05 April 2026 06:06:45 +0000 (0:00:01.134) 0:53:22.392 ********** 2026-04-05 06:07:27.814886 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814895 | orchestrator | 2026-04-05 06:07:27.814903 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 06:07:27.814912 | orchestrator | Sunday 05 April 2026 06:06:46 +0000 (0:00:01.165) 0:53:23.558 ********** 2026-04-05 06:07:27.814921 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814929 | orchestrator | 2026-04-05 06:07:27.814938 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 06:07:27.814947 | orchestrator | Sunday 05 April 2026 06:06:47 +0000 (0:00:01.148) 0:53:24.706 ********** 2026-04-05 06:07:27.814955 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814964 | orchestrator | 2026-04-05 06:07:27.814972 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 06:07:27.814981 | orchestrator | Sunday 05 April 2026 06:06:49 +0000 (0:00:01.154) 0:53:25.861 ********** 2026-04-05 06:07:27.814990 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.814998 | orchestrator | 2026-04-05 06:07:27.815007 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 06:07:27.815016 | orchestrator | Sunday 05 April 2026 06:06:50 +0000 (0:00:01.212) 0:53:27.073 ********** 2026-04-05 06:07:27.815024 | orchestrator | 
ok: [testbed-node-4] 2026-04-05 06:07:27.815033 | orchestrator | 2026-04-05 06:07:27.815042 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 06:07:27.815050 | orchestrator | Sunday 05 April 2026 06:06:52 +0000 (0:00:02.480) 0:53:29.553 ********** 2026-04-05 06:07:27.815059 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:07:27.815068 | orchestrator | 2026-04-05 06:07:27.815076 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 06:07:27.815085 | orchestrator | Sunday 05 April 2026 06:06:53 +0000 (0:00:01.145) 0:53:30.699 ********** 2026-04-05 06:07:27.815116 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-05 06:07:27.815126 | orchestrator | 2026-04-05 06:07:27.815134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 06:07:27.815143 | orchestrator | Sunday 05 April 2026 06:06:55 +0000 (0:00:01.135) 0:53:31.835 ********** 2026-04-05 06:07:27.815151 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815160 | orchestrator | 2026-04-05 06:07:27.815168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 06:07:27.815177 | orchestrator | Sunday 05 April 2026 06:06:56 +0000 (0:00:01.132) 0:53:32.968 ********** 2026-04-05 06:07:27.815257 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815269 | orchestrator | 2026-04-05 06:07:27.815280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 06:07:27.815290 | orchestrator | Sunday 05 April 2026 06:06:57 +0000 (0:00:01.151) 0:53:34.119 ********** 2026-04-05 06:07:27.815302 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815313 | orchestrator | 2026-04-05 06:07:27.815323 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-04-05 06:07:27.815345 | orchestrator | Sunday 05 April 2026 06:06:58 +0000 (0:00:01.125) 0:53:35.245 ********** 2026-04-05 06:07:27.815354 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815363 | orchestrator | 2026-04-05 06:07:27.815372 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-05 06:07:27.815380 | orchestrator | Sunday 05 April 2026 06:06:59 +0000 (0:00:01.366) 0:53:36.612 ********** 2026-04-05 06:07:27.815389 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815398 | orchestrator | 2026-04-05 06:07:27.815406 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 06:07:27.815415 | orchestrator | Sunday 05 April 2026 06:07:01 +0000 (0:00:01.185) 0:53:37.797 ********** 2026-04-05 06:07:27.815424 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815433 | orchestrator | 2026-04-05 06:07:27.815441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 06:07:27.815450 | orchestrator | Sunday 05 April 2026 06:07:02 +0000 (0:00:01.158) 0:53:38.956 ********** 2026-04-05 06:07:27.815458 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815467 | orchestrator | 2026-04-05 06:07:27.815476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 06:07:27.815484 | orchestrator | Sunday 05 April 2026 06:07:03 +0000 (0:00:01.140) 0:53:40.096 ********** 2026-04-05 06:07:27.815493 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815501 | orchestrator | 2026-04-05 06:07:27.815510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 06:07:27.815518 | orchestrator | Sunday 05 April 2026 06:07:04 +0000 (0:00:01.169) 0:53:41.265 ********** 2026-04-05 06:07:27.815527 | orchestrator | ok: [testbed-node-4] 
2026-04-05 06:07:27.815536 | orchestrator | 2026-04-05 06:07:27.815544 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 06:07:27.815568 | orchestrator | Sunday 05 April 2026 06:07:05 +0000 (0:00:01.212) 0:53:42.478 ********** 2026-04-05 06:07:27.815577 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-05 06:07:27.815587 | orchestrator | 2026-04-05 06:07:27.815595 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-05 06:07:27.815604 | orchestrator | Sunday 05 April 2026 06:07:06 +0000 (0:00:01.116) 0:53:43.594 ********** 2026-04-05 06:07:27.815613 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-05 06:07:27.815622 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-05 06:07:27.815630 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-05 06:07:27.815639 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-05 06:07:27.815647 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-05 06:07:27.815656 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-05 06:07:27.815672 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-05 06:07:27.815681 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-05 06:07:27.815690 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 06:07:27.815698 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 06:07:27.815707 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 06:07:27.815716 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 06:07:27.815725 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 06:07:27.815733 | 
orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 06:07:27.815742 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-05 06:07:27.815751 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-05 06:07:27.815759 | orchestrator | 2026-04-05 06:07:27.815768 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 06:07:27.815777 | orchestrator | Sunday 05 April 2026 06:07:13 +0000 (0:00:06.594) 0:53:50.189 ********** 2026-04-05 06:07:27.815785 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-05 06:07:27.815794 | orchestrator | 2026-04-05 06:07:27.815803 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-05 06:07:27.815811 | orchestrator | Sunday 05 April 2026 06:07:14 +0000 (0:00:01.172) 0:53:51.361 ********** 2026-04-05 06:07:27.815820 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:07:27.815829 | orchestrator | 2026-04-05 06:07:27.815838 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-05 06:07:27.815847 | orchestrator | Sunday 05 April 2026 06:07:16 +0000 (0:00:01.486) 0:53:52.847 ********** 2026-04-05 06:07:27.815855 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:07:27.815864 | orchestrator | 2026-04-05 06:07:27.815873 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 06:07:27.815881 | orchestrator | Sunday 05 April 2026 06:07:18 +0000 (0:00:02.033) 0:53:54.881 ********** 2026-04-05 06:07:27.815890 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:07:27.815898 | orchestrator | 
2026-04-05 06:07:27.815907 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 06:07:27.815916 | orchestrator | Sunday 05 April 2026 06:07:19 +0000 (0:00:01.260) 0:53:56.141 **********
2026-04-05 06:07:27.815924 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.815933 | orchestrator |
2026-04-05 06:07:27.815941 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 06:07:27.815950 | orchestrator | Sunday 05 April 2026 06:07:20 +0000 (0:00:01.186) 0:53:57.328 **********
2026-04-05 06:07:27.815959 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.815967 | orchestrator |
2026-04-05 06:07:27.815980 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 06:07:27.815989 | orchestrator | Sunday 05 April 2026 06:07:21 +0000 (0:00:01.148) 0:53:58.477 **********
2026-04-05 06:07:27.815998 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816006 | orchestrator |
2026-04-05 06:07:27.816015 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 06:07:27.816023 | orchestrator | Sunday 05 April 2026 06:07:22 +0000 (0:00:01.150) 0:53:59.628 **********
2026-04-05 06:07:27.816032 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816041 | orchestrator |
2026-04-05 06:07:27.816049 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 06:07:27.816058 | orchestrator | Sunday 05 April 2026 06:07:24 +0000 (0:00:01.191) 0:54:00.819 **********
2026-04-05 06:07:27.816067 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816081 | orchestrator |
2026-04-05 06:07:27.816090 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 06:07:27.816098 | orchestrator | Sunday 05 April 2026 06:07:25 +0000 (0:00:01.164) 0:54:01.984 **********
2026-04-05 06:07:27.816107 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816116 | orchestrator |
2026-04-05 06:07:27.816124 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 06:07:27.816133 | orchestrator | Sunday 05 April 2026 06:07:26 +0000 (0:00:01.157) 0:54:03.141 **********
2026-04-05 06:07:27.816141 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816150 | orchestrator |
2026-04-05 06:07:27.816158 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 06:07:27.816167 | orchestrator | Sunday 05 April 2026 06:07:27 +0000 (0:00:01.214) 0:54:04.356 **********
2026-04-05 06:07:27.816176 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:07:27.816199 | orchestrator |
2026-04-05 06:07:27.816214 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 06:08:25.192622 | orchestrator | Sunday 05 April 2026 06:07:28 +0000 (0:00:01.134) 0:54:05.490 **********
2026-04-05 06:08:25.192713 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.192720 | orchestrator |
2026-04-05 06:08:25.192725 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 06:08:25.192730 | orchestrator | Sunday 05 April 2026 06:07:29 +0000 (0:00:01.162) 0:54:06.653 **********
2026-04-05 06:08:25.192734 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.192738 | orchestrator |
2026-04-05 06:08:25.192742 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 06:08:25.192746 | orchestrator | Sunday 05 April 2026 06:07:31 +0000 (0:00:01.226) 0:54:07.880 **********
2026-04-05 06:08:25.192750 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-04-05 06:08:25.192754 | orchestrator |
2026-04-05 06:08:25.192758 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 06:08:25.192762 | orchestrator | Sunday 05 April 2026 06:07:35 +0000 (0:00:04.582) 0:54:12.463 **********
2026-04-05 06:08:25.192766 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-05 06:08:25.192772 | orchestrator |
2026-04-05 06:08:25.192776 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 06:08:25.192779 | orchestrator | Sunday 05 April 2026 06:07:37 +0000 (0:00:01.267) 0:54:13.730 **********
2026-04-05 06:08:25.192785 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-05 06:08:25.192792 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-05 06:08:25.192799 | orchestrator |
2026-04-05 06:08:25.193684 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 06:08:25.193717 | orchestrator | Sunday 05 April 2026 06:07:42 +0000 (0:00:05.396) 0:54:19.127 **********
2026-04-05 06:08:25.193737 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.193758 | orchestrator |
2026-04-05 06:08:25.193777 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 06:08:25.193796 | orchestrator | Sunday 05 April 2026 06:07:43 +0000 (0:00:01.147) 0:54:20.274 **********
2026-04-05 06:08:25.193814 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.193830 | orchestrator |
2026-04-05 06:08:25.193869 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 06:08:25.193880 | orchestrator | Sunday 05 April 2026 06:07:44 +0000 (0:00:01.139) 0:54:21.413 **********
2026-04-05 06:08:25.193890 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.193899 | orchestrator |
2026-04-05 06:08:25.193909 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 06:08:25.193918 | orchestrator | Sunday 05 April 2026 06:07:45 +0000 (0:00:01.199) 0:54:22.613 **********
2026-04-05 06:08:25.193928 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.193937 | orchestrator |
2026-04-05 06:08:25.193947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 06:08:25.193956 | orchestrator | Sunday 05 April 2026 06:07:47 +0000 (0:00:01.165) 0:54:23.779 **********
2026-04-05 06:08:25.193965 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.193975 | orchestrator |
2026-04-05 06:08:25.193999 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 06:08:25.194009 | orchestrator | Sunday 05 April 2026 06:07:48 +0000 (0:00:01.167) 0:54:24.946 **********
2026-04-05 06:08:25.194071 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.194089 | orchestrator |
2026-04-05 06:08:25.194107 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 06:08:25.194126 | orchestrator | Sunday 05 April 2026 06:07:49 +0000 (0:00:01.241) 0:54:26.188 **********
2026-04-05 06:08:25.194144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:08:25.194164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:08:25.194220 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:08:25.194236 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.194253 | orchestrator |
2026-04-05 06:08:25.194268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 06:08:25.194283 | orchestrator | Sunday 05 April 2026 06:07:50 +0000 (0:00:01.480) 0:54:27.668 **********
2026-04-05 06:08:25.194300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:08:25.194317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:08:25.194334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:08:25.194352 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.194369 | orchestrator |
2026-04-05 06:08:25.194387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 06:08:25.194403 | orchestrator | Sunday 05 April 2026 06:07:52 +0000 (0:00:01.430) 0:54:29.099 **********
2026-04-05 06:08:25.194420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-05 06:08:25.194437 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-05 06:08:25.194453 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-05 06:08:25.194492 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.194505 | orchestrator |
2026-04-05 06:08:25.194522 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 06:08:25.194538 | orchestrator | Sunday 05 April 2026 06:07:53 +0000 (0:00:01.414) 0:54:30.514 **********
2026-04-05 06:08:25.194553 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.194568 | orchestrator |
2026-04-05 06:08:25.194584 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 06:08:25.194608 | orchestrator | Sunday 05 April 2026 06:07:54 +0000 (0:00:01.159) 0:54:31.673 **********
2026-04-05 06:08:25.194624 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-05 06:08:25.194639 | orchestrator |
2026-04-05 06:08:25.194655 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-05 06:08:25.194670 | orchestrator | Sunday 05 April 2026 06:07:56 +0000 (0:00:01.475) 0:54:33.149 **********
2026-04-05 06:08:25.194685 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.194701 | orchestrator |
2026-04-05 06:08:25.194717 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-05 06:08:25.194749 | orchestrator | Sunday 05 April 2026 06:07:58 +0000 (0:00:02.477) 0:54:35.626 **********
2026-04-05 06:08:25.194766 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.194784 | orchestrator |
2026-04-05 06:08:25.194799 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-05 06:08:25.194814 | orchestrator | Sunday 05 April 2026 06:08:00 +0000 (0:00:01.131) 0:54:36.758 **********
2026-04-05 06:08:25.194823 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4
2026-04-05 06:08:25.194833 | orchestrator |
2026-04-05 06:08:25.194842 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-05 06:08:25.194852 | orchestrator | Sunday 05 April 2026 06:08:01 +0000 (0:00:01.508) 0:54:38.267 **********
2026-04-05 06:08:25.194861 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-05 06:08:25.194871 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-05 06:08:25.194880 | orchestrator |
2026-04-05 06:08:25.194889 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-05 06:08:25.194899 | orchestrator | Sunday 05 April 2026 06:08:03 +0000 (0:00:01.954) 0:54:40.222 **********
2026-04-05 06:08:25.194908 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-05 06:08:25.194917 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 06:08:25.194927 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-05 06:08:25.194936 | orchestrator |
2026-04-05 06:08:25.194946 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-05 06:08:25.194955 | orchestrator | Sunday 05 April 2026 06:08:06 +0000 (0:00:03.182) 0:54:43.405 **********
2026-04-05 06:08:25.194965 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-05 06:08:25.194974 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-05 06:08:25.194983 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.194993 | orchestrator |
2026-04-05 06:08:25.195002 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-05 06:08:25.195012 | orchestrator | Sunday 05 April 2026 06:08:08 +0000 (0:00:01.976) 0:54:45.382 **********
2026-04-05 06:08:25.195021 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195030 | orchestrator |
2026-04-05 06:08:25.195040 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-05 06:08:25.195049 | orchestrator | Sunday 05 April 2026 06:08:10 +0000 (0:00:01.523) 0:54:46.905 **********
2026-04-05 06:08:25.195058 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:25.195068 | orchestrator |
2026-04-05 06:08:25.195077 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-05 06:08:25.195087 | orchestrator | Sunday 05 April 2026 06:08:11 +0000 (0:00:01.195) 0:54:48.101 **********
2026-04-05 06:08:25.195096 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4
2026-04-05 06:08:25.195107 | orchestrator |
2026-04-05 06:08:25.195124 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-05 06:08:25.195134 | orchestrator | Sunday 05 April 2026 06:08:12 +0000 (0:00:01.540) 0:54:49.641 **********
2026-04-05 06:08:25.195143 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4
2026-04-05 06:08:25.195153 | orchestrator |
2026-04-05 06:08:25.195162 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-05 06:08:25.195200 | orchestrator | Sunday 05 April 2026 06:08:14 +0000 (0:00:01.726) 0:54:51.368 **********
2026-04-05 06:08:25.195210 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195220 | orchestrator |
2026-04-05 06:08:25.195230 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-05 06:08:25.195239 | orchestrator | Sunday 05 April 2026 06:08:16 +0000 (0:00:02.138) 0:54:53.506 **********
2026-04-05 06:08:25.195249 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195258 | orchestrator |
2026-04-05 06:08:25.195276 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-05 06:08:25.195285 | orchestrator | Sunday 05 April 2026 06:08:18 +0000 (0:00:02.017) 0:54:55.524 **********
2026-04-05 06:08:25.195295 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195304 | orchestrator |
2026-04-05 06:08:25.195313 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-05 06:08:25.195323 | orchestrator | Sunday 05 April 2026 06:08:21 +0000 (0:00:02.329) 0:54:57.854 **********
2026-04-05 06:08:25.195332 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195341 | orchestrator |
2026-04-05 06:08:25.195351 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-05 06:08:25.195360 | orchestrator | Sunday 05 April 2026 06:08:23 +0000 (0:00:02.344) 0:55:00.199 **********
2026-04-05 06:08:25.195370 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:25.195379 | orchestrator |
2026-04-05 06:08:25.195388 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-04-05 06:08:25.195398 | orchestrator | Sunday 05 April 2026 06:08:25 +0000 (0:00:01.649) 0:55:01.848 **********
2026-04-05 06:08:25.195418 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:08:59.892887 | orchestrator |
2026-04-05 06:08:59.893034 | orchestrator | TASK [Restart active mds] ******************************************************
2026-04-05 06:08:59.893065 | orchestrator | Sunday 05 April 2026 06:08:26 +0000 (0:00:01.184) 0:55:03.032 **********
2026-04-05 06:08:59.893078 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:08:59.893091 | orchestrator |
2026-04-05 06:08:59.893102 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-04-05 06:08:59.893113 | orchestrator |
2026-04-05 06:08:59.893124 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 06:08:59.893135 | orchestrator | Sunday 05 April 2026 06:08:35 +0000 (0:00:09.130) 0:55:12.163 **********
2026-04-05 06:08:59.893146 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5, testbed-node-3
2026-04-05 06:08:59.893157 | orchestrator |
2026-04-05 06:08:59.893205 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-05 06:08:59.893216 | orchestrator | Sunday 05 April 2026 06:08:37 +0000 (0:00:01.693) 0:55:13.856 **********
2026-04-05 06:08:59.893227 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893238 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893249 | orchestrator |
2026-04-05 06:08:59.893260 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-05 06:08:59.893270 | orchestrator | Sunday 05 April 2026 06:08:38 +0000 (0:00:01.586) 0:55:15.443 **********
2026-04-05 06:08:59.893281 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893292 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893302 | orchestrator |
2026-04-05 06:08:59.893314 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:08:59.893324 | orchestrator | Sunday 05 April 2026 06:08:40 +0000 (0:00:01.391) 0:55:16.835 **********
2026-04-05 06:08:59.893336 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893347 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893358 | orchestrator |
2026-04-05 06:08:59.893369 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:08:59.893380 | orchestrator | Sunday 05 April 2026 06:08:41 +0000 (0:00:01.570) 0:55:18.406 **********
2026-04-05 06:08:59.893390 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893401 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893415 | orchestrator |
2026-04-05 06:08:59.893428 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-05 06:08:59.893440 | orchestrator | Sunday 05 April 2026 06:08:42 +0000 (0:00:01.291) 0:55:19.698 **********
2026-04-05 06:08:59.893453 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893466 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893478 | orchestrator |
2026-04-05 06:08:59.893491 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-05 06:08:59.893504 | orchestrator | Sunday 05 April 2026 06:08:44 +0000 (0:00:01.271) 0:55:20.970 **********
2026-04-05 06:08:59.893545 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893558 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893571 | orchestrator |
2026-04-05 06:08:59.893584 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-05 06:08:59.893597 | orchestrator | Sunday 05 April 2026 06:08:45 +0000 (0:00:01.280) 0:55:22.250 **********
2026-04-05 06:08:59.893609 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:08:59.893622 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:08:59.893636 | orchestrator |
2026-04-05 06:08:59.893648 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-05 06:08:59.893661 | orchestrator | Sunday 05 April 2026 06:08:47 +0000 (0:00:01.838) 0:55:24.088 **********
2026-04-05 06:08:59.893674 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893687 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893699 | orchestrator |
2026-04-05 06:08:59.893712 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-05 06:08:59.893724 | orchestrator | Sunday 05 April 2026 06:08:48 +0000 (0:00:01.308) 0:55:25.397 **********
2026-04-05 06:08:59.893736 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:08:59.893749 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:08:59.893764 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:08:59.893777 | orchestrator |
2026-04-05 06:08:59.893788 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-05 06:08:59.893799 | orchestrator | Sunday 05 April 2026 06:08:50 +0000 (0:00:01.806) 0:55:27.203 **********
2026-04-05 06:08:59.893809 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:08:59.893820 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:08:59.893831 | orchestrator |
2026-04-05 06:08:59.893842 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-05 06:08:59.893852 | orchestrator | Sunday 05 April 2026 06:08:51 +0000 (0:00:01.392) 0:55:28.596 **********
2026-04-05 06:08:59.893863 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:08:59.893873 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:08:59.893884 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:08:59.893895 | orchestrator |
2026-04-05 06:08:59.893906 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-05 06:08:59.893917 | orchestrator | Sunday 05 April 2026 06:08:54 +0000 (0:00:02.991) 0:55:31.587 **********
2026-04-05 06:08:59.893927 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 06:08:59.893939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 06:08:59.893949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 06:08:59.893960 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:08:59.893971 | orchestrator |
2026-04-05 06:08:59.893982 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-05 06:08:59.894106 | orchestrator | Sunday 05 April 2026 06:08:56 +0000 (0:00:01.516) 0:55:33.104 **********
2026-04-05 06:08:59.894145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894219 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:08:59.894231 | orchestrator |
2026-04-05 06:08:59.894241 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-05 06:08:59.894252 | orchestrator | Sunday 05 April 2026 06:08:58 +0000 (0:00:02.165) 0:55:35.269 **********
2026-04-05 06:08:59.894266 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894280 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894291 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894303 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:08:59.894314 | orchestrator |
2026-04-05 06:08:59.894324 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-05 06:08:59.894335 | orchestrator | Sunday 05 April 2026 06:08:59 +0000 (0:00:01.180) 0:55:36.450 **********
2026-04-05 06:08:59.894354 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:08:52.451596', 'end': '2026-04-05 06:08:52.500196', 'delta': '0:00:00.048600', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894370 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:08:53.091522', 'end': '2026-04-05 06:08:53.142647', 'delta': '0:00:00.051125', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-05 06:08:59.894390 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:08:53.658653', 'end': '2026-04-05 06:08:53.706593', 'delta': '0:00:00.047940', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-05 06:09:22.217269 | orchestrator |
2026-04-05 06:09:22.217418 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-05 06:09:22.217444 | orchestrator | Sunday 05 April 2026 06:09:01 +0000 (0:00:01.297) 0:55:37.747 **********
2026-04-05 06:09:22.217457 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.217469 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.217480 | orchestrator |
2026-04-05 06:09:22.217491 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-05 06:09:22.217502 | orchestrator | Sunday 05 April 2026 06:09:03 +0000 (0:00:02.038) 0:55:39.785 **********
2026-04-05 06:09:22.217513 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.217524 | orchestrator |
2026-04-05 06:09:22.217535 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-05 06:09:22.217546 | orchestrator | Sunday 05 April 2026 06:09:04 +0000 (0:00:01.290) 0:55:41.076 **********
2026-04-05 06:09:22.217556 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.217567 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.217578 | orchestrator |
2026-04-05 06:09:22.217589 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-05 06:09:22.217600 | orchestrator | Sunday 05 April 2026 06:09:05 +0000 (0:00:01.322) 0:55:42.398 **********
2026-04-05 06:09:22.217610 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-05 06:09:22.217621 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-05 06:09:22.217632 | orchestrator |
2026-04-05 06:09:22.217642 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 06:09:22.217653 | orchestrator | Sunday 05 April 2026 06:09:08 +0000 (0:00:03.108) 0:55:45.506 **********
2026-04-05 06:09:22.217664 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.217674 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.217685 | orchestrator |
2026-04-05 06:09:22.217695 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-05 06:09:22.217706 | orchestrator | Sunday 05 April 2026 06:09:10 +0000 (0:00:01.333) 0:55:46.840 **********
2026-04-05 06:09:22.217717 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.217727 | orchestrator |
2026-04-05 06:09:22.217739 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-05 06:09:22.217753 | orchestrator | Sunday 05 April 2026 06:09:11 +0000 (0:00:01.143) 0:55:47.984 **********
2026-04-05 06:09:22.217765 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.217779 | orchestrator |
2026-04-05 06:09:22.217792 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-05 06:09:22.217804 | orchestrator | Sunday 05 April 2026 06:09:12 +0000 (0:00:01.232) 0:55:49.217 **********
2026-04-05 06:09:22.217817 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.217830 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:22.217841 | orchestrator |
2026-04-05 06:09:22.217851 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-05 06:09:22.217862 | orchestrator | Sunday 05 April 2026 06:09:13 +0000 (0:00:01.286) 0:55:50.503 **********
2026-04-05 06:09:22.217873 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.217883 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:22.217894 | orchestrator |
2026-04-05 06:09:22.217904 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-05 06:09:22.217915 | orchestrator | Sunday 05 April 2026 06:09:15 +0000 (0:00:01.506) 0:55:52.010 **********
2026-04-05 06:09:22.217926 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.217954 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.217965 | orchestrator |
2026-04-05 06:09:22.217976 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-05 06:09:22.218011 | orchestrator | Sunday 05 April 2026 06:09:16 +0000 (0:00:01.257) 0:55:53.311 **********
2026-04-05 06:09:22.218092 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.218104 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:22.218115 | orchestrator |
2026-04-05 06:09:22.218126 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-05 06:09:22.218137 | orchestrator | Sunday 05 April 2026 06:09:17 +0000 (0:00:01.257) 0:55:54.568 **********
2026-04-05 06:09:22.218147 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.218193 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.218205 | orchestrator |
2026-04-05 06:09:22.218216 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-05 06:09:22.218227 | orchestrator | Sunday 05 April 2026 06:09:19 +0000 (0:00:01.631) 0:55:56.200 **********
2026-04-05 06:09:22.218237 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:22.218248 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:22.218259 | orchestrator |
2026-04-05 06:09:22.218269 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-05 06:09:22.218281 | orchestrator | Sunday 05 April 2026 06:09:20 +0000 (0:00:01.233) 0:55:57.433 **********
2026-04-05 06:09:22.218291 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:22.218302 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:22.218312 | orchestrator |
2026-04-05 06:09:22.218323 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-05 06:09:22.218333 | orchestrator | Sunday 05 April 2026 06:09:22 +0000 (0:00:01.308) 0:55:58.741 **********
2026-04-05 06:09:22.218347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 06:09:22.218383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}})
2026-04-05 06:09:22.218399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-05 06:09:22.218411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}})
2026-04-05 06:09:22.218439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 06:09:22.218452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-05 06:09:22.218464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-05 06:09:22.218476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.218495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}})  2026-04-05 06:09:22.329579 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}})  2026-04-05 06:09:22.329629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:09:22.329677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.329721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 
'holders': []}})  2026-04-05 06:09:22.329733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}})  2026-04-05 06:09:22.329744 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:09:22.329756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:09:22.329774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}})  2026-04-05 06:09:22.440667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:09:22.440835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440872 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}})  2026-04-05 06:09:22.440901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}})  2026-04-05 06:09:22.440922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:09:22.440959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:09:22.440989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:09:24.236730 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:09:24.236830 | orchestrator | 2026-04-05 06:09:24.236846 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:09:24.236859 | orchestrator | Sunday 05 April 2026 06:09:23 +0000 (0:00:01.551) 0:56:00.293 ********** 2026-04-05 06:09:24.236874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.236907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.236921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.236934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.236972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237030 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237114 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.237243 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331539 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331673 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331771 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:09:24.331785 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.331825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.464857 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.464981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.464999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465049 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465079 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:24.465214 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:53.611110 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:53.611291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:09:53.611336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:09:53.611351 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.611364 | orchestrator |
2026-04-05 06:09:53.611377 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 06:09:53.611389 | orchestrator | Sunday 05 April 2026 06:09:25 +0000 (0:00:02.078) 0:56:02.371 **********
2026-04-05 06:09:53.611400 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:53.611411 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:53.611422 | orchestrator |
2026-04-05 06:09:53.611433 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 06:09:53.611444 | orchestrator | Sunday 05 April 2026 06:09:27 +0000 (0:00:01.735) 0:56:04.107 **********
2026-04-05 06:09:53.611455 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:53.611465 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:53.611476 | orchestrator |
2026-04-05 06:09:53.611487 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:09:53.611498 | orchestrator | Sunday 05 April 2026 06:09:28 +0000 (0:00:01.260) 0:56:05.367 **********
2026-04-05 06:09:53.611508 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:53.611519 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:53.611530 | orchestrator |
2026-04-05 06:09:53.611541 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:09:53.611551 | orchestrator | Sunday 05 April 2026 06:09:30 +0000 (0:00:01.665) 0:56:07.033 **********
2026-04-05 06:09:53.611562 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.611573 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.611584 | orchestrator |
2026-04-05 06:09:53.611594 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:09:53.611605 | orchestrator | Sunday 05 April 2026 06:09:31 +0000 (0:00:01.228) 0:56:08.261 **********
2026-04-05 06:09:53.611616 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.611630 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.611642 | orchestrator |
2026-04-05 06:09:53.611655 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:09:53.611668 | orchestrator | Sunday 05 April 2026 06:09:32 +0000 (0:00:01.444) 0:56:09.706 **********
2026-04-05 06:09:53.611680 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.611693 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.611705 | orchestrator |
2026-04-05 06:09:53.611718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 06:09:53.611731 | orchestrator | Sunday 05 April 2026 06:09:34 +0000 (0:00:01.538) 0:56:11.244 **********
2026-04-05 06:09:53.611743 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 06:09:53.611756 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 06:09:53.611783 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 06:09:53.611798 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 06:09:53.611811 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 06:09:53.611822 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 06:09:53.611832 | orchestrator |
2026-04-05 06:09:53.611843 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 06:09:53.611862 | orchestrator | Sunday 05 April 2026 06:09:36 +0000 (0:00:01.869) 0:56:13.114 **********
2026-04-05 06:09:53.611891 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 06:09:53.611903 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 06:09:53.611914 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 06:09:53.611925 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.611935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 06:09:53.611946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 06:09:53.611956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 06:09:53.611967 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.611978 | orchestrator |
2026-04-05 06:09:53.611988 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 06:09:53.611999 | orchestrator | Sunday 05 April 2026 06:09:37 +0000 (0:00:01.379) 0:56:14.493 **********
2026-04-05 06:09:53.612010 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5, testbed-node-3
2026-04-05 06:09:53.612022 | orchestrator |
2026-04-05 06:09:53.612033 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 06:09:53.612044 | orchestrator | Sunday 05 April 2026 06:09:39 +0000 (0:00:01.319) 0:56:15.813 **********
2026-04-05 06:09:53.612055 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612065 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.612076 | orchestrator |
2026-04-05 06:09:53.612087 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 06:09:53.612098 | orchestrator | Sunday 05 April 2026 06:09:40 +0000 (0:00:01.270) 0:56:17.083 **********
2026-04-05 06:09:53.612108 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612119 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.612130 | orchestrator |
2026-04-05 06:09:53.612141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 06:09:53.612190 | orchestrator | Sunday 05 April 2026 06:09:41 +0000 (0:00:01.339) 0:56:18.424 **********
2026-04-05 06:09:53.612204 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612216 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:09:53.612226 | orchestrator |
2026-04-05 06:09:53.612237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 06:09:53.612248 | orchestrator | Sunday 05 April 2026 06:09:43 +0000 (0:00:01.643) 0:56:20.067 **********
2026-04-05 06:09:53.612259 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:53.612270 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:53.612280 | orchestrator |
2026-04-05 06:09:53.612291 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 06:09:53.612302 | orchestrator | Sunday 05 April 2026 06:09:44 +0000 (0:00:01.480) 0:56:21.548 **********
2026-04-05 06:09:53.612313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:09:53.612324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:09:53.612334 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:09:53.612345 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612356 | orchestrator |
2026-04-05 06:09:53.612366 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 06:09:53.612377 | orchestrator | Sunday 05 April 2026 06:09:46 +0000 (0:00:01.611) 0:56:23.160 **********
2026-04-05 06:09:53.612388 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:09:53.612399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:09:53.612409 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:09:53.612420 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612431 | orchestrator |
2026-04-05 06:09:53.612441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 06:09:53.612459 | orchestrator | Sunday 05 April 2026 06:09:47 +0000 (0:00:01.515) 0:56:24.676 **********
2026-04-05 06:09:53.612470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:09:53.612481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:09:53.612492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:09:53.612502 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:09:53.612513 | orchestrator |
2026-04-05 06:09:53.612524 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 06:09:53.612535 | orchestrator | Sunday 05 April 2026 06:09:49 +0000 (0:00:01.475) 0:56:26.151 **********
2026-04-05 06:09:53.612545 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:09:53.612556 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:09:53.612567 | orchestrator |
2026-04-05 06:09:53.612578 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 06:09:53.612588 | orchestrator | Sunday 05 April 2026 06:09:50 +0000 (0:00:01.438) 0:56:27.590 **********
2026-04-05 06:09:53.612599 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-05 06:09:53.612610 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 06:09:53.612621 | orchestrator |
2026-04-05 06:09:53.612631 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 06:09:53.612642 | orchestrator | Sunday 05 April 2026 06:09:52 +0000 (0:00:01.452) 0:56:29.042 **********
2026-04-05 06:09:53.612653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:09:53.612664 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:09:53.612681 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:09:53.612692 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:09:53.612703 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:09:53.612714 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:09:53.612732 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:10:38.831565 | orchestrator |
2026-04-05 06:10:38.831678 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 06:10:38.831696 | orchestrator | Sunday 05 April 2026 06:09:54 +0000 (0:00:02.400) 0:56:31.443 **********
2026-04-05 06:10:38.831708 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:10:38.831720 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:10:38.831732 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:10:38.831743 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:10:38.831754 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:10:38.831766 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:10:38.831777 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:10:38.831788 | orchestrator |
2026-04-05 06:10:38.831799 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-04-05 06:10:38.831810 | orchestrator | Sunday 05 April 2026 06:09:57 +0000 (0:00:03.018) 0:56:34.462 **********
2026-04-05 06:10:38.831821 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.831833 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.831844 | orchestrator |
2026-04-05 06:10:38.831855 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 06:10:38.831866 | orchestrator | Sunday 05 April 2026 06:09:59 +0000 (0:00:01.295) 0:56:35.757 **********
2026-04-05 06:10:38.831877 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5, testbed-node-3
2026-04-05 06:10:38.831914 | orchestrator |
2026-04-05 06:10:38.831925 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 06:10:38.831936 | orchestrator | Sunday 05 April 2026 06:10:00 +0000 (0:00:01.388) 0:56:37.146 **********
2026-04-05 06:10:38.831947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5, testbed-node-3
2026-04-05 06:10:38.831958 | orchestrator |
2026-04-05 06:10:38.831969 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 06:10:38.831979 | orchestrator | Sunday 05 April 2026 06:10:01 +0000 (0:00:01.219) 0:56:38.365 **********
2026-04-05 06:10:38.831990 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832001 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832012 | orchestrator |
2026-04-05 06:10:38.832022 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 06:10:38.832033 | orchestrator | Sunday 05 April 2026 06:10:03 +0000 (0:00:01.356) 0:56:39.722 **********
2026-04-05 06:10:38.832044 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:10:38.832055 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:10:38.832065 | orchestrator |
2026-04-05 06:10:38.832076 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 06:10:38.832089 | orchestrator | Sunday 05 April 2026 06:10:04 +0000 (0:00:01.617) 0:56:41.340 **********
2026-04-05 06:10:38.832101 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:10:38.832114 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:10:38.832126 | orchestrator |
2026-04-05 06:10:38.832139 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 06:10:38.832178 | orchestrator | Sunday 05 April 2026 06:10:06 +0000 (0:00:01.704) 0:56:43.044 **********
2026-04-05 06:10:38.832191 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:10:38.832204 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:10:38.832216 | orchestrator |
2026-04-05 06:10:38.832228 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 06:10:38.832241 | orchestrator | Sunday 05 April 2026 06:10:08 +0000 (0:00:01.726) 0:56:44.771 **********
2026-04-05 06:10:38.832253 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832266 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832278 | orchestrator |
2026-04-05 06:10:38.832290 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 06:10:38.832302 | orchestrator | Sunday 05 April 2026 06:10:09 +0000 (0:00:01.262) 0:56:46.034 **********
2026-04-05 06:10:38.832315 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832327 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832339 | orchestrator |
2026-04-05 06:10:38.832352 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 06:10:38.832365 | orchestrator | Sunday 05 April 2026 06:10:10 +0000 (0:00:01.279) 0:56:47.314 **********
2026-04-05 06:10:38.832377 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832389 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832401 | orchestrator |
2026-04-05 06:10:38.832414 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 06:10:38.832426 | orchestrator | Sunday 05 April 2026 06:10:11 +0000 (0:00:01.220) 0:56:48.535 **********
2026-04-05 06:10:38.832439 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:10:38.832449 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:10:38.832460 | orchestrator |
2026-04-05 06:10:38.832471 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 06:10:38.832482 | orchestrator | Sunday 05 April 2026 06:10:13 +0000 (0:00:01.604) 0:56:50.139 **********
2026-04-05 06:10:38.832492 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:10:38.832503 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:10:38.832514 | orchestrator |
2026-04-05 06:10:38.832539 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 06:10:38.832550 | orchestrator | Sunday 05 April 2026 06:10:15 +0000 (0:00:01.629) 0:56:51.768 **********
2026-04-05 06:10:38.832569 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832580 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832591 | orchestrator |
2026-04-05 06:10:38.832602 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 06:10:38.832613 | orchestrator | Sunday 05 April 2026 06:10:16 +0000 (0:00:01.259) 0:56:53.028 **********
2026-04-05 06:10:38.832624 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.832653 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.832665 | orchestrator |
2026-04-05 06:10:38.832676 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 06:10:38.832687 | orchestrator | Sunday 05
April 2026 06:10:17 +0000 (0:00:01.245) 0:56:54.273 ********** 2026-04-05 06:10:38.832697 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:10:38.832708 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:10:38.832719 | orchestrator | 2026-04-05 06:10:38.832730 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 06:10:38.832741 | orchestrator | Sunday 05 April 2026 06:10:18 +0000 (0:00:01.230) 0:56:55.504 ********** 2026-04-05 06:10:38.832751 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:10:38.832762 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:10:38.832773 | orchestrator | 2026-04-05 06:10:38.832784 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 06:10:38.832794 | orchestrator | Sunday 05 April 2026 06:10:20 +0000 (0:00:01.353) 0:56:56.858 ********** 2026-04-05 06:10:38.832805 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:10:38.832816 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:10:38.832827 | orchestrator | 2026-04-05 06:10:38.832837 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 06:10:38.832848 | orchestrator | Sunday 05 April 2026 06:10:21 +0000 (0:00:01.298) 0:56:58.156 ********** 2026-04-05 06:10:38.832859 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.832870 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.832880 | orchestrator | 2026-04-05 06:10:38.832891 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 06:10:38.832902 | orchestrator | Sunday 05 April 2026 06:10:23 +0000 (0:00:01.680) 0:56:59.837 ********** 2026-04-05 06:10:38.832913 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.832924 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.832934 | orchestrator | 2026-04-05 06:10:38.832945 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-04-05 06:10:38.832956 | orchestrator | Sunday 05 April 2026 06:10:24 +0000 (0:00:01.265) 0:57:01.102 ********** 2026-04-05 06:10:38.832967 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.832978 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.832988 | orchestrator | 2026-04-05 06:10:38.832999 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 06:10:38.833010 | orchestrator | Sunday 05 April 2026 06:10:25 +0000 (0:00:01.316) 0:57:02.419 ********** 2026-04-05 06:10:38.833021 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:10:38.833031 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:10:38.833042 | orchestrator | 2026-04-05 06:10:38.833053 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 06:10:38.833064 | orchestrator | Sunday 05 April 2026 06:10:26 +0000 (0:00:01.227) 0:57:03.647 ********** 2026-04-05 06:10:38.833074 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:10:38.833085 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:10:38.833096 | orchestrator | 2026-04-05 06:10:38.833107 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 06:10:38.833117 | orchestrator | Sunday 05 April 2026 06:10:28 +0000 (0:00:01.302) 0:57:04.949 ********** 2026-04-05 06:10:38.833128 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833139 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.833168 | orchestrator | 2026-04-05 06:10:38.833179 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 06:10:38.833198 | orchestrator | Sunday 05 April 2026 06:10:29 +0000 (0:00:01.307) 0:57:06.256 ********** 2026-04-05 06:10:38.833208 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833219 | orchestrator | skipping: [testbed-node-3] 
2026-04-05 06:10:38.833230 | orchestrator | 2026-04-05 06:10:38.833241 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 06:10:38.833251 | orchestrator | Sunday 05 April 2026 06:10:30 +0000 (0:00:01.334) 0:57:07.591 ********** 2026-04-05 06:10:38.833262 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833273 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.833284 | orchestrator | 2026-04-05 06:10:38.833295 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 06:10:38.833305 | orchestrator | Sunday 05 April 2026 06:10:32 +0000 (0:00:01.268) 0:57:08.859 ********** 2026-04-05 06:10:38.833316 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833327 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.833337 | orchestrator | 2026-04-05 06:10:38.833348 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 06:10:38.833359 | orchestrator | Sunday 05 April 2026 06:10:33 +0000 (0:00:01.238) 0:57:10.097 ********** 2026-04-05 06:10:38.833370 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833381 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.833392 | orchestrator | 2026-04-05 06:10:38.833403 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 06:10:38.833413 | orchestrator | Sunday 05 April 2026 06:10:34 +0000 (0:00:01.254) 0:57:11.352 ********** 2026-04-05 06:10:38.833424 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:10:38.833435 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:10:38.833446 | orchestrator | 2026-04-05 06:10:38.833457 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 06:10:38.833468 | orchestrator | Sunday 05 April 2026 06:10:35 +0000 (0:00:01.286) 0:57:12.639 ********** 
2026-04-05 06:10:38.833479 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.833489 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.833500 | orchestrator |
2026-04-05 06:10:38.833511 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 06:10:38.833527 | orchestrator | Sunday 05 April 2026 06:10:37 +0000 (0:00:01.271) 0:57:13.910 **********
2026-04-05 06:10:38.833538 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.833549 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.833559 | orchestrator |
2026-04-05 06:10:38.833570 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 06:10:38.833581 | orchestrator | Sunday 05 April 2026 06:10:38 +0000 (0:00:01.371) 0:57:15.282 **********
2026-04-05 06:10:38.833592 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:10:38.833603 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:10:38.833614 | orchestrator |
2026-04-05 06:10:38.833631 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 06:11:26.245547 | orchestrator | Sunday 05 April 2026 06:10:39 +0000 (0:00:01.242) 0:57:16.524 **********
2026-04-05 06:11:26.245663 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.245680 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.245692 | orchestrator |
2026-04-05 06:11:26.245704 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 06:11:26.245715 | orchestrator | Sunday 05 April 2026 06:10:41 +0000 (0:00:01.330) 0:57:17.855 **********
2026-04-05 06:11:26.245726 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.245737 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.245748 | orchestrator |
2026-04-05 06:11:26.245759 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 06:11:26.245770 | orchestrator | Sunday 05 April 2026 06:10:42 +0000 (0:00:01.254) 0:57:19.110 **********
2026-04-05 06:11:26.245781 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.245792 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.245827 | orchestrator |
2026-04-05 06:11:26.245839 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 06:11:26.245850 | orchestrator | Sunday 05 April 2026 06:10:43 +0000 (0:00:01.251) 0:57:20.361 **********
2026-04-05 06:11:26.245860 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.245872 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.245882 | orchestrator |
2026-04-05 06:11:26.245893 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 06:11:26.245904 | orchestrator | Sunday 05 April 2026 06:10:45 +0000 (0:00:02.171) 0:57:22.533 **********
2026-04-05 06:11:26.245914 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.245925 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.245935 | orchestrator |
2026-04-05 06:11:26.245946 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 06:11:26.245957 | orchestrator | Sunday 05 April 2026 06:10:48 +0000 (0:00:02.920) 0:57:25.454 **********
2026-04-05 06:11:26.245968 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5, testbed-node-3
2026-04-05 06:11:26.245979 | orchestrator |
2026-04-05 06:11:26.246066 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 06:11:26.246081 | orchestrator | Sunday 05 April 2026 06:10:50 +0000 (0:00:01.605) 0:57:27.059 **********
2026-04-05 06:11:26.246093 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246107 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246119 | orchestrator |
2026-04-05 06:11:26.246132 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 06:11:26.246146 | orchestrator | Sunday 05 April 2026 06:10:51 +0000 (0:00:01.482) 0:57:28.542 **********
2026-04-05 06:11:26.246158 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246171 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246184 | orchestrator |
2026-04-05 06:11:26.246196 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 06:11:26.246209 | orchestrator | Sunday 05 April 2026 06:10:53 +0000 (0:00:01.306) 0:57:29.848 **********
2026-04-05 06:11:26.246222 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 06:11:26.246235 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 06:11:26.246247 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 06:11:26.246260 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 06:11:26.246273 | orchestrator |
2026-04-05 06:11:26.246286 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 06:11:26.246297 | orchestrator | Sunday 05 April 2026 06:10:55 +0000 (0:00:01.941) 0:57:31.789 **********
2026-04-05 06:11:26.246308 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.246319 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.246329 | orchestrator |
2026-04-05 06:11:26.246340 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 06:11:26.246350 | orchestrator | Sunday 05 April 2026 06:10:56 +0000 (0:00:01.839) 0:57:33.629 **********
2026-04-05 06:11:26.246361 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246372 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246382 | orchestrator |
2026-04-05 06:11:26.246393 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 06:11:26.246403 | orchestrator | Sunday 05 April 2026 06:10:58 +0000 (0:00:01.285) 0:57:34.914 **********
2026-04-05 06:11:26.246414 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246425 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246435 | orchestrator |
2026-04-05 06:11:26.246446 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 06:11:26.246457 | orchestrator | Sunday 05 April 2026 06:10:59 +0000 (0:00:01.352) 0:57:36.266 **********
2026-04-05 06:11:26.246467 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246478 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246498 | orchestrator |
2026-04-05 06:11:26.246509 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 06:11:26.246519 | orchestrator | Sunday 05 April 2026 06:11:00 +0000 (0:00:01.379) 0:57:37.645 **********
2026-04-05 06:11:26.246530 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5, testbed-node-3
2026-04-05 06:11:26.246541 | orchestrator |
2026-04-05 06:11:26.246567 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 06:11:26.246578 | orchestrator | Sunday 05 April 2026 06:11:02 +0000 (0:00:01.269) 0:57:38.916 **********
2026-04-05 06:11:26.246589 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.246599 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.246610 | orchestrator |
2026-04-05 06:11:26.246621 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 06:11:26.246632 | orchestrator | Sunday 05 April 2026 06:11:04 +0000 (0:00:01.877) 0:57:40.793 **********
2026-04-05 06:11:26.246642 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 06:11:26.246671 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 06:11:26.246682 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 06:11:26.246693 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246704 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 06:11:26.246714 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 06:11:26.246725 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 06:11:26.246735 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246746 | orchestrator |
2026-04-05 06:11:26.246757 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 06:11:26.246767 | orchestrator | Sunday 05 April 2026 06:11:05 +0000 (0:00:01.808) 0:57:42.602 **********
2026-04-05 06:11:26.246778 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246788 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246799 | orchestrator |
2026-04-05 06:11:26.246809 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 06:11:26.246820 | orchestrator | Sunday 05 April 2026 06:11:07 +0000 (0:00:01.262) 0:57:43.865 **********
2026-04-05 06:11:26.246830 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246841 | orchestrator |
2026-04-05 06:11:26.246852 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 06:11:26.246862 | orchestrator | Sunday 05 April 2026 06:11:08 +0000 (0:00:01.189) 0:57:45.055 **********
2026-04-05 06:11:26.246873 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246883 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246894 | orchestrator |
2026-04-05 06:11:26.246904 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 06:11:26.246915 | orchestrator | Sunday 05 April 2026 06:11:09 +0000 (0:00:01.354) 0:57:46.409 **********
2026-04-05 06:11:26.246925 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.246936 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.246947 | orchestrator |
2026-04-05 06:11:26.246957 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 06:11:26.246968 | orchestrator | Sunday 05 April 2026 06:11:11 +0000 (0:00:01.510) 0:57:47.920 **********
2026-04-05 06:11:26.246978 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247007 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247018 | orchestrator |
2026-04-05 06:11:26.247029 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 06:11:26.247039 | orchestrator | Sunday 05 April 2026 06:11:12 +0000 (0:00:01.250) 0:57:49.170 **********
2026-04-05 06:11:26.247050 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.247061 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.247080 | orchestrator |
2026-04-05 06:11:26.247090 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 06:11:26.247101 | orchestrator | Sunday 05 April 2026 06:11:15 +0000 (0:00:02.590) 0:57:51.761 **********
2026-04-05 06:11:26.247111 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:11:26.247122 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:11:26.247132 | orchestrator |
2026-04-05 06:11:26.247143 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 06:11:26.247153 | orchestrator | Sunday 05 April 2026 06:11:16 +0000 (0:00:01.387) 0:57:53.269 **********
2026-04-05 06:11:26.247164 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5, testbed-node-3
2026-04-05 06:11:26.247176 | orchestrator |
2026-04-05 06:11:26.247187 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 06:11:26.247197 | orchestrator | Sunday 05 April 2026 06:11:17 +0000 (0:00:01.387) 0:57:54.656 **********
2026-04-05 06:11:26.247208 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247219 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247229 | orchestrator |
2026-04-05 06:11:26.247240 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 06:11:26.247251 | orchestrator | Sunday 05 April 2026 06:11:19 +0000 (0:00:01.267) 0:57:55.923 **********
2026-04-05 06:11:26.247261 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247271 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247282 | orchestrator |
2026-04-05 06:11:26.247293 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 06:11:26.247303 | orchestrator | Sunday 05 April 2026 06:11:20 +0000 (0:00:01.293) 0:57:57.217 **********
2026-04-05 06:11:26.247314 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247324 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247335 | orchestrator |
2026-04-05 06:11:26.247346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 06:11:26.247356 | orchestrator | Sunday 05 April 2026 06:11:21 +0000 (0:00:01.264) 0:57:58.482 **********
2026-04-05 06:11:26.247367 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247377 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247388 | orchestrator |
2026-04-05 06:11:26.247398 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 06:11:26.247409 | orchestrator | Sunday 05 April 2026 06:11:22 +0000 (0:00:01.207) 0:57:59.689 **********
2026-04-05 06:11:26.247420 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247430 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247441 | orchestrator |
2026-04-05 06:11:26.247451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 06:11:26.247468 | orchestrator | Sunday 05 April 2026 06:11:24 +0000 (0:00:01.698) 0:58:01.387 **********
2026-04-05 06:11:26.247478 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247489 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247500 | orchestrator |
2026-04-05 06:11:26.247510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 06:11:26.247521 | orchestrator | Sunday 05 April 2026 06:11:25 +0000 (0:00:01.289) 0:58:02.677 **********
2026-04-05 06:11:26.247531 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:11:26.247542 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:11:26.247553 | orchestrator |
2026-04-05 06:11:26.247571 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 06:12:06.370817 | orchestrator | Sunday 05 April 2026 06:11:27 +0000 (0:00:01.305) 0:58:03.982 **********
2026-04-05 06:12:06.371011 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.371030 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.371042 | orchestrator |
2026-04-05 06:12:06.371054 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 06:12:06.371065 | orchestrator | Sunday 05 April 2026 06:11:28 +0000 (0:00:01.244) 0:58:05.227 **********
2026-04-05 06:12:06.371101 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:12:06.371114 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:12:06.371125 | orchestrator |
2026-04-05 06:12:06.371136 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 06:12:06.371146 | orchestrator | Sunday 05 April 2026 06:11:29 +0000 (0:00:01.285) 0:58:06.513 **********
2026-04-05 06:12:06.371158 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5, testbed-node-3
2026-04-05 06:12:06.371168 | orchestrator |
2026-04-05 06:12:06.371179 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 06:12:06.371190 | orchestrator | Sunday 05 April 2026 06:11:31 +0000 (0:00:01.625) 0:58:08.138 **********
2026-04-05 06:12:06.371201 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-05 06:12:06.371212 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 06:12:06.371223 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-05 06:12:06.371234 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 06:12:06.371244 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-05 06:12:06.371254 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 06:12:06.371265 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-05 06:12:06.371275 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 06:12:06.371286 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-05 06:12:06.371296 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 06:12:06.371306 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-05 06:12:06.371317 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 06:12:06.371327 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-05 06:12:06.371338 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 06:12:06.371348 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-05 06:12:06.371361 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 06:12:06.371374 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 06:12:06.371386 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 06:12:06.371398 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 06:12:06.371410 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 06:12:06.371423 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 06:12:06.371436 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 06:12:06.371448 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 06:12:06.371460 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 06:12:06.371473 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 06:12:06.371485 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 06:12:06.371498 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 06:12:06.371511 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 06:12:06.371523 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-05 06:12:06.371536 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 06:12:06.371548 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-05 06:12:06.371560 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 06:12:06.371573 | orchestrator |
2026-04-05 06:12:06.371586 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 06:12:06.371599 | orchestrator | Sunday 05 April 2026 06:11:38 +0000 (0:00:06.790) 0:58:14.929 **********
2026-04-05 06:12:06.371611 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5, testbed-node-3
2026-04-05 06:12:06.371632 | orchestrator |
2026-04-05 06:12:06.371645 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 06:12:06.371657 | orchestrator | Sunday 05 April 2026 06:11:39 +0000 (0:00:01.383) 0:58:16.312 **********
2026-04-05 06:12:06.371671 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.371685 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.371699 | orchestrator |
2026-04-05 06:12:06.371735 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 06:12:06.371757 | orchestrator | Sunday 05 April 2026 06:11:41 +0000 (0:00:01.623) 0:58:17.936 **********
2026-04-05 06:12:06.371775 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.371791 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.371802 | orchestrator |
2026-04-05 06:12:06.371813 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 06:12:06.371867 | orchestrator | Sunday 05 April 2026 06:11:43 +0000 (0:00:02.080) 0:58:20.016 **********
2026-04-05 06:12:06.371881 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.371892 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.371903 | orchestrator |
2026-04-05 06:12:06.371913 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 06:12:06.371924 | orchestrator | Sunday 05 April 2026 06:11:44 +0000 (0:00:01.611) 0:58:21.627 **********
2026-04-05 06:12:06.371935 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.371945 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.371956 | orchestrator |
2026-04-05 06:12:06.371966 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 06:12:06.371977 | orchestrator | Sunday 05 April 2026 06:11:46 +0000 (0:00:01.292) 0:58:22.919 **********
2026-04-05 06:12:06.371988 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.371998 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372009 | orchestrator |
2026-04-05 06:12:06.372019 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 06:12:06.372030 | orchestrator | Sunday 05 April 2026 06:11:47 +0000 (0:00:01.286) 0:58:24.206 **********
2026-04-05 06:12:06.372040 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372051 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372068 | orchestrator |
2026-04-05 06:12:06.372085 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 06:12:06.372103 | orchestrator | Sunday 05 April 2026 06:11:48 +0000 (0:00:01.327) 0:58:25.534 **********
2026-04-05 06:12:06.372121 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372138 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372149 | orchestrator |
2026-04-05 06:12:06.372159 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 06:12:06.372170 | orchestrator | Sunday 05 April 2026 06:11:50 +0000 (0:00:01.247) 0:58:26.782 **********
2026-04-05 06:12:06.372181 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372191 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372202 | orchestrator |
2026-04-05 06:12:06.372213 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 06:12:06.372223 | orchestrator | Sunday 05 April 2026 06:11:51 +0000 (0:00:01.263) 0:58:28.046 **********
2026-04-05 06:12:06.372234 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372244 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372255 | orchestrator |
2026-04-05 06:12:06.372266 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 06:12:06.372285 | orchestrator | Sunday 05 April 2026 06:11:52 +0000 (0:00:01.418) 0:58:29.464 **********
2026-04-05 06:12:06.372295 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372306 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372317 | orchestrator |
2026-04-05 06:12:06.372328 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 06:12:06.372338 | orchestrator | Sunday 05 April 2026 06:11:54 +0000 (0:00:01.294) 0:58:30.858 **********
2026-04-05 06:12:06.372349 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372360 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372370 | orchestrator |
2026-04-05 06:12:06.372381 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 06:12:06.372391 | orchestrator | Sunday 05 April 2026 06:11:55 +0000 (0:00:01.294) 0:58:32.153 **********
2026-04-05 06:12:06.372402 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372413 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372423 | orchestrator |
2026-04-05 06:12:06.372434 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 06:12:06.372444 | orchestrator | Sunday 05 April 2026 06:11:56 +0000 (0:00:01.234) 0:58:33.388 **********
2026-04-05 06:12:06.372455 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:06.372465 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:06.372476 | orchestrator |
2026-04-05 06:12:06.372487 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 06:12:06.372497 | orchestrator | Sunday 05 April 2026 06:11:57 +0000 (0:00:01.238) 0:58:34.626 **********
2026-04-05 06:12:06.372508 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-05 06:12:06.372519 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-05 06:12:06.372529 | orchestrator |
2026-04-05 06:12:06.372540 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 06:12:06.372553 | orchestrator | Sunday 05 April 2026 06:12:02 +0000 (0:00:04.603) 0:58:39.229 **********
2026-04-05 06:12:06.372572 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.372591 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:12:06.372609 | orchestrator |
2026-04-05 06:12:06.372627 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-05 06:12:06.372639 | orchestrator | Sunday 05 April 2026 06:12:03 +0000 (0:00:01.305) 0:58:40.534 **********
2026-04-05 06:12:06.372659 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-05 06:12:06.372682 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-05 06:12:58.186288 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-05 06:12:58.186403 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-05 06:12:58.186444 | orchestrator |
2026-04-05 06:12:58.186458 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-05 06:12:58.186471 | orchestrator | Sunday 05 April 2026 06:12:09 +0000 (0:00:05.689) 0:58:46.224 **********
2026-04-05 06:12:58.186482 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:12:58.186494 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:12:58.186505 | orchestrator |
2026-04-05 06:12:58.186516 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-05 06:12:58.186527 | orchestrator | Sunday 05 April 2026 06:12:10 +0000
(0:00:01.340) 0:58:47.564 ********** 2026-04-05 06:12:58.186538 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186549 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.186559 | orchestrator | 2026-04-05 06:12:58.186571 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:12:58.186583 | orchestrator | Sunday 05 April 2026 06:12:12 +0000 (0:00:01.455) 0:58:49.019 ********** 2026-04-05 06:12:58.186594 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186605 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.186615 | orchestrator | 2026-04-05 06:12:58.186626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 06:12:58.186637 | orchestrator | Sunday 05 April 2026 06:12:13 +0000 (0:00:01.246) 0:58:50.266 ********** 2026-04-05 06:12:58.186682 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186694 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.186705 | orchestrator | 2026-04-05 06:12:58.186716 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:12:58.186726 | orchestrator | Sunday 05 April 2026 06:12:14 +0000 (0:00:01.306) 0:58:51.572 ********** 2026-04-05 06:12:58.186737 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186748 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.186758 | orchestrator | 2026-04-05 06:12:58.186769 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:12:58.186780 | orchestrator | Sunday 05 April 2026 06:12:16 +0000 (0:00:01.355) 0:58:52.927 ********** 2026-04-05 06:12:58.186790 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.186802 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.186814 | orchestrator | 2026-04-05 
06:12:58.186827 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:12:58.186839 | orchestrator | Sunday 05 April 2026 06:12:17 +0000 (0:00:01.395) 0:58:54.323 ********** 2026-04-05 06:12:58.186852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:12:58.186864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:12:58.186876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:12:58.186889 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186902 | orchestrator | 2026-04-05 06:12:58.186914 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:12:58.186927 | orchestrator | Sunday 05 April 2026 06:12:19 +0000 (0:00:02.088) 0:58:56.411 ********** 2026-04-05 06:12:58.186939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:12:58.186951 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:12:58.186963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:12:58.186975 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.186987 | orchestrator | 2026-04-05 06:12:58.187000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:12:58.187012 | orchestrator | Sunday 05 April 2026 06:12:21 +0000 (0:00:01.483) 0:58:57.895 ********** 2026-04-05 06:12:58.187025 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:12:58.187036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:12:58.187049 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:12:58.187069 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.187089 | orchestrator | 2026-04-05 06:12:58.187116 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-04-05 06:12:58.187137 | orchestrator | Sunday 05 April 2026 06:12:22 +0000 (0:00:01.539) 0:58:59.434 ********** 2026-04-05 06:12:58.187174 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.187192 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.187207 | orchestrator | 2026-04-05 06:12:58.187226 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:12:58.187244 | orchestrator | Sunday 05 April 2026 06:12:24 +0000 (0:00:01.369) 0:59:00.804 ********** 2026-04-05 06:12:58.187263 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-05 06:12:58.187283 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-05 06:12:58.187301 | orchestrator | 2026-04-05 06:12:58.187320 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 06:12:58.187339 | orchestrator | Sunday 05 April 2026 06:12:25 +0000 (0:00:01.501) 0:59:02.305 ********** 2026-04-05 06:12:58.187350 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.187361 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.187371 | orchestrator | 2026-04-05 06:12:58.187402 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-05 06:12:58.187413 | orchestrator | Sunday 05 April 2026 06:12:27 +0000 (0:00:01.858) 0:59:04.164 ********** 2026-04-05 06:12:58.187424 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.187435 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.187445 | orchestrator | 2026-04-05 06:12:58.187456 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-05 06:12:58.187467 | orchestrator | Sunday 05 April 2026 06:12:29 +0000 (0:00:01.713) 0:59:05.877 ********** 2026-04-05 06:12:58.187477 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5, 
testbed-node-3 2026-04-05 06:12:58.187489 | orchestrator | 2026-04-05 06:12:58.187499 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-05 06:12:58.187510 | orchestrator | Sunday 05 April 2026 06:12:30 +0000 (0:00:01.352) 0:59:07.230 ********** 2026-04-05 06:12:58.187521 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-05 06:12:58.187531 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-05 06:12:58.187542 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-05 06:12:58.187552 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-05 06:12:58.187562 | orchestrator | 2026-04-05 06:12:58.187573 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-05 06:12:58.187584 | orchestrator | Sunday 05 April 2026 06:12:32 +0000 (0:00:02.033) 0:59:09.263 ********** 2026-04-05 06:12:58.187595 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:12:58.187605 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 06:12:58.187616 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:12:58.187627 | orchestrator | 2026-04-05 06:12:58.187637 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:12:58.187678 | orchestrator | Sunday 05 April 2026 06:12:35 +0000 (0:00:03.208) 0:59:12.472 ********** 2026-04-05 06:12:58.187689 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:12:58.187700 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 06:12:58.187711 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.187722 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:12:58.187732 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-04-05 06:12:58.187749 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.187769 | orchestrator | 2026-04-05 06:12:58.187790 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-05 06:12:58.187810 | orchestrator | Sunday 05 April 2026 06:12:37 +0000 (0:00:02.099) 0:59:14.571 ********** 2026-04-05 06:12:58.187844 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.187866 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.187884 | orchestrator | 2026-04-05 06:12:58.187896 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-05 06:12:58.187907 | orchestrator | Sunday 05 April 2026 06:12:39 +0000 (0:00:01.736) 0:59:16.307 ********** 2026-04-05 06:12:58.187917 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.187928 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:12:58.187938 | orchestrator | 2026-04-05 06:12:58.187949 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-05 06:12:58.187959 | orchestrator | Sunday 05 April 2026 06:12:41 +0000 (0:00:01.757) 0:59:18.065 ********** 2026-04-05 06:12:58.187970 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-3 2026-04-05 06:12:58.187981 | orchestrator | 2026-04-05 06:12:58.187992 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-05 06:12:58.188002 | orchestrator | Sunday 05 April 2026 06:12:42 +0000 (0:00:01.254) 0:59:19.320 ********** 2026-04-05 06:12:58.188013 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5, testbed-node-3 2026-04-05 06:12:58.188023 | orchestrator | 2026-04-05 06:12:58.188034 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-05 06:12:58.188044 | orchestrator | Sunday 05 April 2026 
06:12:43 +0000 (0:00:01.248) 0:59:20.568 ********** 2026-04-05 06:12:58.188055 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.188066 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.188076 | orchestrator | 2026-04-05 06:12:58.188087 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-05 06:12:58.188097 | orchestrator | Sunday 05 April 2026 06:12:46 +0000 (0:00:02.209) 0:59:22.778 ********** 2026-04-05 06:12:58.188108 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.188118 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.188129 | orchestrator | 2026-04-05 06:12:58.188139 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-05 06:12:58.188150 | orchestrator | Sunday 05 April 2026 06:12:48 +0000 (0:00:02.229) 0:59:25.007 ********** 2026-04-05 06:12:58.188161 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.188171 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.188182 | orchestrator | 2026-04-05 06:12:58.188193 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-05 06:12:58.188211 | orchestrator | Sunday 05 April 2026 06:12:50 +0000 (0:00:02.320) 0:59:27.328 ********** 2026-04-05 06:12:58.188222 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:12:58.188233 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:12:58.188243 | orchestrator | 2026-04-05 06:12:58.188254 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-05 06:12:58.188265 | orchestrator | Sunday 05 April 2026 06:12:54 +0000 (0:00:03.433) 0:59:30.761 ********** 2026-04-05 06:12:58.188275 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:12:58.188286 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:12:58.188297 | orchestrator | 2026-04-05 06:12:58.188307 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-04-05 06:12:58.188318 | orchestrator | Sunday 05 April 2026 06:12:55 +0000 (0:00:01.772) 0:59:32.534 ********** 2026-04-05 06:12:58.188328 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:12:58.188347 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:13:23.350462 | orchestrator | 2026-04-05 06:13:23.350653 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-05 06:13:23.350675 | orchestrator | 2026-04-05 06:13:23.350687 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 06:13:23.350699 | orchestrator | Sunday 05 April 2026 06:12:59 +0000 (0:00:03.671) 0:59:36.205 ********** 2026-04-05 06:13:23.350710 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-05 06:13:23.350745 | orchestrator | 2026-04-05 06:13:23.350757 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 06:13:23.350767 | orchestrator | Sunday 05 April 2026 06:13:00 +0000 (0:00:01.411) 0:59:37.617 ********** 2026-04-05 06:13:23.350778 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.350790 | orchestrator | 2026-04-05 06:13:23.350801 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 06:13:23.350811 | orchestrator | Sunday 05 April 2026 06:13:02 +0000 (0:00:01.439) 0:59:39.056 ********** 2026-04-05 06:13:23.350822 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.350833 | orchestrator | 2026-04-05 06:13:23.350844 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 06:13:23.350855 | orchestrator | Sunday 05 April 2026 06:13:03 +0000 (0:00:01.143) 0:59:40.200 ********** 2026-04-05 06:13:23.350865 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.350876 | 
orchestrator | 2026-04-05 06:13:23.350887 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 06:13:23.350898 | orchestrator | Sunday 05 April 2026 06:13:04 +0000 (0:00:01.475) 0:59:41.676 ********** 2026-04-05 06:13:23.350908 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.350919 | orchestrator | 2026-04-05 06:13:23.350930 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 06:13:23.350941 | orchestrator | Sunday 05 April 2026 06:13:06 +0000 (0:00:01.154) 0:59:42.830 ********** 2026-04-05 06:13:23.350952 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.350962 | orchestrator | 2026-04-05 06:13:23.350973 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 06:13:23.350984 | orchestrator | Sunday 05 April 2026 06:13:07 +0000 (0:00:01.201) 0:59:44.032 ********** 2026-04-05 06:13:23.350997 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.351009 | orchestrator | 2026-04-05 06:13:23.351022 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 06:13:23.351036 | orchestrator | Sunday 05 April 2026 06:13:08 +0000 (0:00:01.139) 0:59:45.171 ********** 2026-04-05 06:13:23.351048 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:23.351061 | orchestrator | 2026-04-05 06:13:23.351073 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 06:13:23.351086 | orchestrator | Sunday 05 April 2026 06:13:09 +0000 (0:00:01.146) 0:59:46.318 ********** 2026-04-05 06:13:23.351098 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.351110 | orchestrator | 2026-04-05 06:13:23.351123 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 06:13:23.351135 | orchestrator | Sunday 05 April 2026 06:13:10 +0000 (0:00:01.167) 
0:59:47.485 ********** 2026-04-05 06:13:23.351148 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:13:23.351161 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:13:23.351173 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:13:23.351185 | orchestrator | 2026-04-05 06:13:23.351197 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-05 06:13:23.351209 | orchestrator | Sunday 05 April 2026 06:13:12 +0000 (0:00:02.202) 0:59:49.687 ********** 2026-04-05 06:13:23.351221 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:23.351233 | orchestrator | 2026-04-05 06:13:23.351246 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 06:13:23.351258 | orchestrator | Sunday 05 April 2026 06:13:14 +0000 (0:00:01.242) 0:59:50.930 ********** 2026-04-05 06:13:23.351271 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:13:23.351284 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:13:23.351296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:13:23.351308 | orchestrator | 2026-04-05 06:13:23.351321 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 06:13:23.351342 | orchestrator | Sunday 05 April 2026 06:13:17 +0000 (0:00:03.305) 0:59:54.235 ********** 2026-04-05 06:13:23.351353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-05 06:13:23.351364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-05 06:13:23.351375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-05 
06:13:23.351385 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:23.351396 | orchestrator | 2026-04-05 06:13:23.351407 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 06:13:23.351432 | orchestrator | Sunday 05 April 2026 06:13:19 +0000 (0:00:01.878) 0:59:56.114 ********** 2026-04-05 06:13:23.351447 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 06:13:23.351469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 06:13:23.351510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 06:13:23.351531 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:23.351549 | orchestrator | 2026-04-05 06:13:23.351635 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 06:13:23.351647 | orchestrator | Sunday 05 April 2026 06:13:21 +0000 (0:00:02.430) 0:59:58.544 ********** 2026-04-05 06:13:23.351661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 
06:13:23.351675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:23.351686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:23.351697 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:23.351708 | orchestrator | 2026-04-05 06:13:23.351719 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 06:13:23.351730 | orchestrator | Sunday 05 April 2026 06:13:23 +0000 (0:00:01.288) 0:59:59.833 ********** 2026-04-05 06:13:23.351744 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:13:14.738626', 'end': '2026-04-05 06:13:14.796472', 'delta': '0:00:00.057846', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 06:13:23.351768 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:13:15.822621', 'end': '2026-04-05 06:13:15.867847', 'delta': '0:00:00.045226', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 06:13:23.351787 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:13:16.332247', 'end': '2026-04-05 06:13:16.373765', 'delta': '0:00:00.041518', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 06:13:23.351799 | orchestrator | 2026-04-05 06:13:23.351819 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 06:13:42.171404 | orchestrator | Sunday 05 April 2026 06:13:24 +0000 (0:00:01.200) 1:00:01.033 ********** 2026-04-05 06:13:42.171577 | orchestrator | ok: [testbed-node-3] 2026-04-05 
06:13:42.171603 | orchestrator | 2026-04-05 06:13:42.171617 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 06:13:42.171631 | orchestrator | Sunday 05 April 2026 06:13:25 +0000 (0:00:01.280) 1:00:02.313 ********** 2026-04-05 06:13:42.171644 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.171658 | orchestrator | 2026-04-05 06:13:42.171673 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-05 06:13:42.171687 | orchestrator | Sunday 05 April 2026 06:13:26 +0000 (0:00:01.286) 1:00:03.600 ********** 2026-04-05 06:13:42.171701 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:42.171714 | orchestrator | 2026-04-05 06:13:42.171742 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 06:13:42.171756 | orchestrator | Sunday 05 April 2026 06:13:28 +0000 (0:00:01.174) 1:00:04.775 ********** 2026-04-05 06:13:42.171771 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:13:42.171785 | orchestrator | 2026-04-05 06:13:42.171799 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:13:42.171807 | orchestrator | Sunday 05 April 2026 06:13:30 +0000 (0:00:02.029) 1:00:06.804 ********** 2026-04-05 06:13:42.171815 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:42.171823 | orchestrator | 2026-04-05 06:13:42.171832 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 06:13:42.171840 | orchestrator | Sunday 05 April 2026 06:13:31 +0000 (0:00:01.162) 1:00:07.967 ********** 2026-04-05 06:13:42.171848 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.171856 | orchestrator | 2026-04-05 06:13:42.171864 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 06:13:42.171873 | orchestrator 
| Sunday 05 April 2026 06:13:32 +0000 (0:00:01.137) 1:00:09.104 ********** 2026-04-05 06:13:42.171881 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.171914 | orchestrator | 2026-04-05 06:13:42.171922 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:13:42.171930 | orchestrator | Sunday 05 April 2026 06:13:33 +0000 (0:00:01.236) 1:00:10.340 ********** 2026-04-05 06:13:42.171938 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.171947 | orchestrator | 2026-04-05 06:13:42.171957 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 06:13:42.171966 | orchestrator | Sunday 05 April 2026 06:13:34 +0000 (0:00:01.109) 1:00:11.450 ********** 2026-04-05 06:13:42.171976 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.171985 | orchestrator | 2026-04-05 06:13:42.171996 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 06:13:42.172010 | orchestrator | Sunday 05 April 2026 06:13:35 +0000 (0:00:01.146) 1:00:12.596 ********** 2026-04-05 06:13:42.172023 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:42.172036 | orchestrator | 2026-04-05 06:13:42.172049 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 06:13:42.172063 | orchestrator | Sunday 05 April 2026 06:13:37 +0000 (0:00:01.194) 1:00:13.791 ********** 2026-04-05 06:13:42.172075 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.172089 | orchestrator | 2026-04-05 06:13:42.172103 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 06:13:42.172116 | orchestrator | Sunday 05 April 2026 06:13:38 +0000 (0:00:01.089) 1:00:14.881 ********** 2026-04-05 06:13:42.172129 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:42.172142 | orchestrator | 2026-04-05 06:13:42.172157 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 06:13:42.172171 | orchestrator | Sunday 05 April 2026 06:13:39 +0000 (0:00:01.366) 1:00:16.248 ********** 2026-04-05 06:13:42.172186 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:42.172201 | orchestrator | 2026-04-05 06:13:42.172215 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 06:13:42.172230 | orchestrator | Sunday 05 April 2026 06:13:40 +0000 (0:00:01.166) 1:00:17.414 ********** 2026-04-05 06:13:42.172244 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:13:42.172258 | orchestrator | 2026-04-05 06:13:42.172272 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 06:13:42.172286 | orchestrator | Sunday 05 April 2026 06:13:41 +0000 (0:00:01.250) 1:00:18.664 ********** 2026-04-05 06:13:42.172304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:42.172340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}})  2026-04-05 06:13:42.172383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:13:42.172415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}})  2026-04-05 06:13:42.172429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:42.172444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:42.172458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:13:42.172474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:42.172520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:13:42.172548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:43.462941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}})  2026-04-05 06:13:43.463068 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}})  2026-04-05 06:13:43.463095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:43.463143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:13:43.463217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:43.463234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:13:43.463246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:13:43.463259 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:13:43.463272 | orchestrator | 2026-04-05 06:13:43.463284 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:13:43.463315 | orchestrator | Sunday 05 April 2026 06:13:43 +0000 (0:00:01.380) 1:00:20.045 ********** 2026-04-05 06:13:43.463329 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.463342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a', 'dm-uuid-LVM-Wa0ywAPEsth3PkAcFdQjJ1wwgCe1rlsj8RyzQ2wuijAAPQOjcJyHabU3njzMKmSL'], 'uuids': ['e6543215-ff22-4095-81ab-ed44a1bf8cb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.463361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22', 'scsi-SQEMU_QEMU_HARDDISK_2d4d21e8-ae21-4c18-add4-77055e4ecd22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2d4d21e8', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.463391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JcLpS7-Istp-ErkR-0N1a-8DAO-xdkW-2MNJrh', 'scsi-0QEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51', 'scsi-SQEMU_QEMU_HARDDISK_a4ecbb0a-2836-403e-9178-b4fc03a4ee51'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583239 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs', 'dm-uuid-CRYPT-LUKS2-85cdff47472b4414a3ddb4c2fa7a215f-jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583301 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2b14998b--6337--5d33--8563--647c08b40df2-osd--block--2b14998b--6337--5d33--8563--647c08b40df2', 'dm-uuid-LVM-G3OCy4lFYpBms6guD4lYPb76KuIZmRZ5jt95nNh9WjhvHW0ar76FV1kuMNWiR2Zs'], 'uuids': ['85cdff47-472b-4414-a3dd-b4c2fa7a215f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a4ecbb0a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['jt95nN-h9Wj-hvHW-0ar7-6FV1-kuMN-WiR2Zs']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-kXF833-s1K1-A6Sx-gbIH-023A-sL0I-yYhQjY', 'scsi-0QEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6', 'scsi-SQEMU_QEMU_HARDDISK_c4b125a1-49de-45bb-8abb-de12a0ea86b6'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c4b125a1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4671660f--3880--5125--9575--24d25698498a-osd--block--4671660f--3880--5125--9575--24d25698498a']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:13:43.583358 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e2ff4b61', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2ff4b61-09b0-482b-8c24-7e588d8d5007-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:14:14.475919 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:14:14.476037 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:14:14.476070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL', 'dm-uuid-CRYPT-LUKS2-e6543215ff22409581abed44a1bf8cb1-8RyzQ2-wuij-AAPQ-OjcJ-yHab-U3nj-zMKmSL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:14:14.476111 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476125 | orchestrator |
2026-04-05 06:14:14.476137 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 06:14:14.476149 | orchestrator | Sunday 05 April 2026 06:13:44 +0000 (0:00:01.443) 1:00:21.489 **********
2026-04-05 06:14:14.476159 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:14:14.476171 | orchestrator |
2026-04-05 06:14:14.476182 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 06:14:14.476193 | orchestrator | Sunday 05 April 2026 06:13:46 +0000 (0:00:01.529) 1:00:23.018 **********
2026-04-05 06:14:14.476203 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:14:14.476214 | orchestrator |
2026-04-05 06:14:14.476224 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:14:14.476235 | orchestrator | Sunday 05 April 2026 06:13:47 +0000 (0:00:01.170) 1:00:24.188 **********
2026-04-05 06:14:14.476246 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:14:14.476257 | orchestrator |
2026-04-05 06:14:14.476268 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:14:14.476279 | orchestrator | Sunday 05 April 2026 06:13:48 +0000 (0:00:01.440) 1:00:25.629 **********
2026-04-05 06:14:14.476289 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476300 | orchestrator |
2026-04-05 06:14:14.476314 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:14:14.476332 | orchestrator | Sunday 05 April 2026 06:13:50 +0000 (0:00:01.125) 1:00:26.755 **********
2026-04-05 06:14:14.476350 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476369 | orchestrator |
2026-04-05 06:14:14.476386 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:14:14.476433 | orchestrator | Sunday 05 April 2026 06:13:51 +0000 (0:00:01.355) 1:00:28.110 **********
2026-04-05 06:14:14.476452 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476470 | orchestrator |
2026-04-05 06:14:14.476488 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 06:14:14.476507 | orchestrator | Sunday 05 April 2026 06:13:52 +0000 (0:00:01.187) 1:00:29.298 **********
2026-04-05 06:14:14.476527 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 06:14:14.476546 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 06:14:14.476566 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 06:14:14.476584 | orchestrator |
2026-04-05 06:14:14.476603 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 06:14:14.476621 | orchestrator | Sunday 05 April 2026 06:13:55 +0000 (0:00:02.589) 1:00:31.887 **********
2026-04-05 06:14:14.476634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-05 06:14:14.476648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-05 06:14:14.476660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-05 06:14:14.476673 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476685 | orchestrator |
2026-04-05 06:14:14.476698 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 06:14:14.476711 | orchestrator | Sunday 05 April 2026 06:13:56 +0000 (0:00:01.176) 1:00:33.064 **********
2026-04-05 06:14:14.476741 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-04-05 06:14:14.476763 | orchestrator |
2026-04-05 06:14:14.476782 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 06:14:14.476802 | orchestrator | Sunday 05 April 2026 06:13:57 +0000 (0:00:01.434) 1:00:34.499 **********
2026-04-05 06:14:14.476821 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476836 | orchestrator |
2026-04-05 06:14:14.476855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 06:14:14.476873 | orchestrator | Sunday 05 April 2026 06:13:58 +0000 (0:00:01.147) 1:00:35.647 **********
2026-04-05 06:14:14.476909 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476927 | orchestrator |
2026-04-05 06:14:14.476938 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 06:14:14.476949 | orchestrator | Sunday 05 April 2026 06:14:00 +0000 (0:00:01.156) 1:00:36.803 **********
2026-04-05 06:14:14.476959 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.476970 | orchestrator |
2026-04-05 06:14:14.476980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 06:14:14.476991 | orchestrator | Sunday 05 April 2026 06:14:01 +0000 (0:00:01.190) 1:00:37.993 **********
2026-04-05 06:14:14.477001 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:14:14.477012 | orchestrator |
2026-04-05 06:14:14.477022 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 06:14:14.477033 | orchestrator | Sunday 05 April 2026 06:14:02 +0000 (0:00:01.205) 1:00:39.198 **********
2026-04-05 06:14:14.477043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 06:14:14.477060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 06:14:14.477078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 06:14:14.477096 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.477115 | orchestrator |
2026-04-05 06:14:14.477133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 06:14:14.477152 | orchestrator | Sunday 05 April 2026 06:14:03 +0000 (0:00:01.397) 1:00:40.596 **********
2026-04-05 06:14:14.477172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 06:14:14.477190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 06:14:14.477207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 06:14:14.477226 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.477244 | orchestrator |
2026-04-05 06:14:14.477262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 06:14:14.477292 | orchestrator | Sunday 05 April 2026 06:14:05 +0000 (0:00:01.474) 1:00:42.070 **********
2026-04-05 06:14:14.477313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 06:14:14.477327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-05 06:14:14.477344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-05 06:14:14.477362 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:14:14.477380 | orchestrator |
2026-04-05 06:14:14.477426 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 06:14:14.477446 | orchestrator | Sunday 05 April 2026 06:14:06 +0000 (0:00:01.493) 1:00:43.564 **********
2026-04-05 06:14:14.477465 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:14:14.477484 | orchestrator |
2026-04-05 06:14:14.477502 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 06:14:14.477520 | orchestrator | Sunday 05 April 2026 06:14:07 +0000 (0:00:01.141) 1:00:44.706 **********
2026-04-05 06:14:14.477539 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-05 06:14:14.477558 | orchestrator |
2026-04-05 06:14:14.477575 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 06:14:14.477594 | orchestrator | Sunday 05 April 2026 06:14:09 +0000 (0:00:01.415) 1:00:46.121 **********
2026-04-05 06:14:14.477614 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:14:14.477631 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:14:14.477649 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:14:14.477666 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 06:14:14.477683 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:14:14.477702 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 06:14:14.477733 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:14:14.477752 | orchestrator |
2026-04-05 06:14:14.477770 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 06:14:14.477787 | orchestrator | Sunday 05 April 2026 06:14:11 +0000 (0:00:02.440) 1:00:48.562 **********
2026-04-05 06:14:14.477804 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:14:14.477822 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:14:14.477839 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:14:14.477858 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-05 06:14:14.477877 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:14:14.477895 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-05 06:14:14.477913 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:14:14.477931 | orchestrator |
2026-04-05 06:14:14.477965 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-05 06:15:09.369558 | orchestrator | Sunday 05 April 2026 06:14:15 +0000 (0:00:03.323) 1:00:51.885 **********
2026-04-05 06:15:09.369714 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:15:09.369740 | orchestrator |
2026-04-05 06:15:09.369753 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-05 06:15:09.369764 | orchestrator | Sunday 05 April 2026 06:14:17 +0000 (0:00:02.300) 1:00:54.186 **********
2026-04-05 06:15:09.369776 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:15:09.369788 | orchestrator |
2026-04-05 06:15:09.369799 | orchestrator | TASK [Stop ceph rgw (pt.
2)] *************************************************** 2026-04-05 06:15:09.369810 | orchestrator | Sunday 05 April 2026 06:14:20 +0000 (0:00:03.004) 1:00:57.190 ********** 2026-04-05 06:15:09.369822 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 06:15:09.369832 | orchestrator | 2026-04-05 06:15:09.369843 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 06:15:09.369854 | orchestrator | Sunday 05 April 2026 06:14:22 +0000 (0:00:02.257) 1:00:59.447 ********** 2026-04-05 06:15:09.369864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-05 06:15:09.369875 | orchestrator | 2026-04-05 06:15:09.369886 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 06:15:09.369897 | orchestrator | Sunday 05 April 2026 06:14:23 +0000 (0:00:01.144) 1:01:00.592 ********** 2026-04-05 06:15:09.369907 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-05 06:15:09.369918 | orchestrator | 2026-04-05 06:15:09.369929 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 06:15:09.369939 | orchestrator | Sunday 05 April 2026 06:14:25 +0000 (0:00:01.140) 1:01:01.732 ********** 2026-04-05 06:15:09.369950 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.369961 | orchestrator | 2026-04-05 06:15:09.369980 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 06:15:09.369998 | orchestrator | Sunday 05 April 2026 06:14:26 +0000 (0:00:01.156) 1:01:02.890 ********** 2026-04-05 06:15:09.370090 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370114 | orchestrator | 2026-04-05 06:15:09.370131 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-05 06:15:09.370167 | orchestrator | Sunday 05 April 2026 06:14:27 +0000 (0:00:01.517) 1:01:04.408 ********** 2026-04-05 06:15:09.370185 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370201 | orchestrator | 2026-04-05 06:15:09.370307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 06:15:09.370327 | orchestrator | Sunday 05 April 2026 06:14:29 +0000 (0:00:01.585) 1:01:05.993 ********** 2026-04-05 06:15:09.370343 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370359 | orchestrator | 2026-04-05 06:15:09.370373 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 06:15:09.370389 | orchestrator | Sunday 05 April 2026 06:14:30 +0000 (0:00:01.617) 1:01:07.610 ********** 2026-04-05 06:15:09.370403 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.370419 | orchestrator | 2026-04-05 06:15:09.370433 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 06:15:09.370448 | orchestrator | Sunday 05 April 2026 06:14:32 +0000 (0:00:01.143) 1:01:08.754 ********** 2026-04-05 06:15:09.370464 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.370480 | orchestrator | 2026-04-05 06:15:09.370495 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 06:15:09.370511 | orchestrator | Sunday 05 April 2026 06:14:33 +0000 (0:00:01.275) 1:01:10.030 ********** 2026-04-05 06:15:09.370526 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.370542 | orchestrator | 2026-04-05 06:15:09.370558 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 06:15:09.370574 | orchestrator | Sunday 05 April 2026 06:14:34 +0000 (0:00:01.148) 1:01:11.178 ********** 2026-04-05 06:15:09.370590 | 
orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370605 | orchestrator | 2026-04-05 06:15:09.370620 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 06:15:09.370635 | orchestrator | Sunday 05 April 2026 06:14:35 +0000 (0:00:01.519) 1:01:12.698 ********** 2026-04-05 06:15:09.370651 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370666 | orchestrator | 2026-04-05 06:15:09.370682 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 06:15:09.370698 | orchestrator | Sunday 05 April 2026 06:14:37 +0000 (0:00:01.547) 1:01:14.245 ********** 2026-04-05 06:15:09.370713 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.370729 | orchestrator | 2026-04-05 06:15:09.370744 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 06:15:09.370760 | orchestrator | Sunday 05 April 2026 06:14:38 +0000 (0:00:01.202) 1:01:15.448 ********** 2026-04-05 06:15:09.370775 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.370790 | orchestrator | 2026-04-05 06:15:09.370804 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 06:15:09.370820 | orchestrator | Sunday 05 April 2026 06:14:39 +0000 (0:00:01.171) 1:01:16.620 ********** 2026-04-05 06:15:09.370836 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370851 | orchestrator | 2026-04-05 06:15:09.370866 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 06:15:09.370882 | orchestrator | Sunday 05 April 2026 06:14:41 +0000 (0:00:01.148) 1:01:17.769 ********** 2026-04-05 06:15:09.370897 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370913 | orchestrator | 2026-04-05 06:15:09.370929 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 06:15:09.370945 
| orchestrator | Sunday 05 April 2026 06:14:42 +0000 (0:00:01.213) 1:01:18.983 ********** 2026-04-05 06:15:09.370960 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.370977 | orchestrator | 2026-04-05 06:15:09.371016 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 06:15:09.371033 | orchestrator | Sunday 05 April 2026 06:14:43 +0000 (0:00:01.198) 1:01:20.181 ********** 2026-04-05 06:15:09.371049 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371066 | orchestrator | 2026-04-05 06:15:09.371081 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 06:15:09.371096 | orchestrator | Sunday 05 April 2026 06:14:44 +0000 (0:00:01.137) 1:01:21.318 ********** 2026-04-05 06:15:09.371112 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371127 | orchestrator | 2026-04-05 06:15:09.371143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 06:15:09.371173 | orchestrator | Sunday 05 April 2026 06:14:45 +0000 (0:00:01.192) 1:01:22.511 ********** 2026-04-05 06:15:09.371190 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371206 | orchestrator | 2026-04-05 06:15:09.371222 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 06:15:09.371260 | orchestrator | Sunday 05 April 2026 06:14:46 +0000 (0:00:01.185) 1:01:23.696 ********** 2026-04-05 06:15:09.371277 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.371293 | orchestrator | 2026-04-05 06:15:09.371308 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 06:15:09.371323 | orchestrator | Sunday 05 April 2026 06:14:48 +0000 (0:00:01.204) 1:01:24.900 ********** 2026-04-05 06:15:09.371339 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:15:09.371355 | orchestrator | 2026-04-05 06:15:09.371371 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 06:15:09.371385 | orchestrator | Sunday 05 April 2026 06:14:49 +0000 (0:00:01.446) 1:01:26.347 ********** 2026-04-05 06:15:09.371399 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371412 | orchestrator | 2026-04-05 06:15:09.371429 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 06:15:09.371445 | orchestrator | Sunday 05 April 2026 06:14:50 +0000 (0:00:01.227) 1:01:27.575 ********** 2026-04-05 06:15:09.371461 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371477 | orchestrator | 2026-04-05 06:15:09.371493 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 06:15:09.371509 | orchestrator | Sunday 05 April 2026 06:14:51 +0000 (0:00:01.123) 1:01:28.698 ********** 2026-04-05 06:15:09.371526 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371543 | orchestrator | 2026-04-05 06:15:09.371559 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 06:15:09.371575 | orchestrator | Sunday 05 April 2026 06:14:53 +0000 (0:00:01.165) 1:01:29.864 ********** 2026-04-05 06:15:09.371591 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371607 | orchestrator | 2026-04-05 06:15:09.371622 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 06:15:09.371648 | orchestrator | Sunday 05 April 2026 06:14:54 +0000 (0:00:01.126) 1:01:30.990 ********** 2026-04-05 06:15:09.371664 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:15:09.371680 | orchestrator | 2026-04-05 06:15:09.371696 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 06:15:09.371710 | orchestrator | Sunday 05 April 2026 06:14:55 +0000 (0:00:01.279) 1:01:32.269 ********** 
2026-04-05 06:15:09.371725 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.371740 | orchestrator |
2026-04-05 06:15:09.371756 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 06:15:09.371772 | orchestrator | Sunday 05 April 2026 06:14:56 +0000 (0:00:01.239) 1:01:33.509 **********
2026-04-05 06:15:09.371787 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.371803 | orchestrator |
2026-04-05 06:15:09.371820 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 06:15:09.371835 | orchestrator | Sunday 05 April 2026 06:14:57 +0000 (0:00:01.174) 1:01:34.684 **********
2026-04-05 06:15:09.371851 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.371868 | orchestrator |
2026-04-05 06:15:09.371884 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 06:15:09.371901 | orchestrator | Sunday 05 April 2026 06:14:59 +0000 (0:00:01.130) 1:01:35.815 **********
2026-04-05 06:15:09.371917 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.371934 | orchestrator |
2026-04-05 06:15:09.371951 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 06:15:09.371966 | orchestrator | Sunday 05 April 2026 06:15:00 +0000 (0:00:01.159) 1:01:36.975 **********
2026-04-05 06:15:09.371981 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.371997 | orchestrator |
2026-04-05 06:15:09.372028 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 06:15:09.372044 | orchestrator | Sunday 05 April 2026 06:15:01 +0000 (0:00:01.171) 1:01:38.146 **********
2026-04-05 06:15:09.372054 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.372063 | orchestrator |
2026-04-05 06:15:09.372073 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 06:15:09.372082 | orchestrator | Sunday 05 April 2026 06:15:02 +0000 (0:00:01.130) 1:01:39.276 **********
2026-04-05 06:15:09.372091 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:09.372101 | orchestrator |
2026-04-05 06:15:09.372110 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 06:15:09.372117 | orchestrator | Sunday 05 April 2026 06:15:04 +0000 (0:00:01.505) 1:01:40.781 **********
2026-04-05 06:15:09.372125 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:09.372133 | orchestrator |
2026-04-05 06:15:09.372140 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 06:15:09.372148 | orchestrator | Sunday 05 April 2026 06:15:06 +0000 (0:00:02.190) 1:01:42.751 **********
2026-04-05 06:15:09.372156 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:09.372163 | orchestrator |
2026-04-05 06:15:09.372171 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 06:15:09.372179 | orchestrator | Sunday 05 April 2026 06:15:08 +0000 (0:00:02.190) 1:01:44.941 **********
2026-04-05 06:15:09.372186 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-04-05 06:15:09.372195 | orchestrator |
2026-04-05 06:15:09.372202 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 06:15:09.372221 | orchestrator | Sunday 05 April 2026 06:15:09 +0000 (0:00:01.132) 1:01:46.075 **********
2026-04-05 06:15:56.696002 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696145 | orchestrator |
2026-04-05 06:15:56.696164 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 06:15:56.696176 | orchestrator | Sunday 05 April 2026 06:15:10 +0000 (0:00:01.231) 1:01:47.306 **********
2026-04-05 06:15:56.696188 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696198 | orchestrator |
2026-04-05 06:15:56.696210 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 06:15:56.696221 | orchestrator | Sunday 05 April 2026 06:15:11 +0000 (0:00:01.141) 1:01:48.448 **********
2026-04-05 06:15:56.696232 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 06:15:56.696243 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 06:15:56.696255 | orchestrator |
2026-04-05 06:15:56.696265 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 06:15:56.696276 | orchestrator | Sunday 05 April 2026 06:15:13 +0000 (0:00:01.835) 1:01:50.283 **********
2026-04-05 06:15:56.696287 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:56.696298 | orchestrator |
2026-04-05 06:15:56.696309 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 06:15:56.696320 | orchestrator | Sunday 05 April 2026 06:15:15 +0000 (0:00:01.442) 1:01:51.725 **********
2026-04-05 06:15:56.696330 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696341 | orchestrator |
2026-04-05 06:15:56.696352 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 06:15:56.696362 | orchestrator | Sunday 05 April 2026 06:15:16 +0000 (0:00:01.130) 1:01:52.856 **********
2026-04-05 06:15:56.696373 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696384 | orchestrator |
2026-04-05 06:15:56.696394 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 06:15:56.696405 | orchestrator | Sunday 05 April 2026 06:15:17 +0000 (0:00:01.139) 1:01:53.996 **********
2026-04-05 06:15:56.696416 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696426 | orchestrator |
2026-04-05 06:15:56.696437 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 06:15:56.696472 | orchestrator | Sunday 05 April 2026 06:15:18 +0000 (0:00:01.188) 1:01:55.185 **********
2026-04-05 06:15:56.696483 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-04-05 06:15:56.696495 | orchestrator |
2026-04-05 06:15:56.696505 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 06:15:56.696530 | orchestrator | Sunday 05 April 2026 06:15:19 +0000 (0:00:01.349) 1:01:56.534 **********
2026-04-05 06:15:56.696543 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:56.696556 | orchestrator |
2026-04-05 06:15:56.696569 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 06:15:56.696581 | orchestrator | Sunday 05 April 2026 06:15:21 +0000 (0:00:01.728) 1:01:58.263 **********
2026-04-05 06:15:56.696593 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 06:15:56.696606 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 06:15:56.696618 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 06:15:56.696631 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696643 | orchestrator |
2026-04-05 06:15:56.696656 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 06:15:56.696668 | orchestrator | Sunday 05 April 2026 06:15:22 +0000 (0:00:01.148) 1:01:59.412 **********
2026-04-05 06:15:56.696681 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696693 | orchestrator |
2026-04-05 06:15:56.696706 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 06:15:56.696718 | orchestrator | Sunday 05 April 2026 06:15:23 +0000 (0:00:01.146) 1:02:00.559 **********
2026-04-05 06:15:56.696731 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696743 | orchestrator |
2026-04-05 06:15:56.696756 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 06:15:56.696769 | orchestrator | Sunday 05 April 2026 06:15:25 +0000 (0:00:01.197) 1:02:01.756 **********
2026-04-05 06:15:56.696781 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696794 | orchestrator |
2026-04-05 06:15:56.696807 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 06:15:56.696819 | orchestrator | Sunday 05 April 2026 06:15:26 +0000 (0:00:01.148) 1:02:02.905 **********
2026-04-05 06:15:56.696832 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696845 | orchestrator |
2026-04-05 06:15:56.696857 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 06:15:56.696869 | orchestrator | Sunday 05 April 2026 06:15:27 +0000 (0:00:01.123) 1:02:04.029 **********
2026-04-05 06:15:56.696882 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.696895 | orchestrator |
2026-04-05 06:15:56.696907 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 06:15:56.696918 | orchestrator | Sunday 05 April 2026 06:15:28 +0000 (0:00:01.135) 1:02:05.165 **********
2026-04-05 06:15:56.696929 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:56.696939 | orchestrator |
2026-04-05 06:15:56.696950 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 06:15:56.696961 | orchestrator | Sunday 05 April 2026 06:15:30 +0000 (0:00:02.513) 1:02:07.678 **********
2026-04-05 06:15:56.696971 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:56.696982 | orchestrator |
2026-04-05 06:15:56.696993 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 06:15:56.697004 | orchestrator | Sunday 05 April 2026 06:15:32 +0000 (0:00:01.225) 1:02:08.903 **********
2026-04-05 06:15:56.697014 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-05 06:15:56.697025 | orchestrator |
2026-04-05 06:15:56.697036 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 06:15:56.697064 | orchestrator | Sunday 05 April 2026 06:15:33 +0000 (0:00:01.112) 1:02:10.015 **********
2026-04-05 06:15:56.697083 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697094 | orchestrator |
2026-04-05 06:15:56.697121 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 06:15:56.697132 | orchestrator | Sunday 05 April 2026 06:15:34 +0000 (0:00:01.146) 1:02:11.162 **********
2026-04-05 06:15:56.697143 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697154 | orchestrator |
2026-04-05 06:15:56.697165 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 06:15:56.697176 | orchestrator | Sunday 05 April 2026 06:15:35 +0000 (0:00:01.310) 1:02:12.473 **********
2026-04-05 06:15:56.697187 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697197 | orchestrator |
2026-04-05 06:15:56.697208 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 06:15:56.697219 | orchestrator | Sunday 05 April 2026 06:15:36 +0000 (0:00:01.161) 1:02:13.634 **********
2026-04-05 06:15:56.697230 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697241 | orchestrator |
2026-04-05 06:15:56.697251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 06:15:56.697262 | orchestrator | Sunday 05 April 2026 06:15:38 +0000 (0:00:01.196) 1:02:14.831 **********
2026-04-05 06:15:56.697273 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697284 | orchestrator |
2026-04-05 06:15:56.697294 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 06:15:56.697305 | orchestrator | Sunday 05 April 2026 06:15:39 +0000 (0:00:01.169) 1:02:16.000 **********
2026-04-05 06:15:56.697316 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697326 | orchestrator |
2026-04-05 06:15:56.697337 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 06:15:56.697348 | orchestrator | Sunday 05 April 2026 06:15:40 +0000 (0:00:01.249) 1:02:17.250 **********
2026-04-05 06:15:56.697358 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697369 | orchestrator |
2026-04-05 06:15:56.697380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 06:15:56.697391 | orchestrator | Sunday 05 April 2026 06:15:41 +0000 (0:00:01.174) 1:02:18.424 **********
2026-04-05 06:15:56.697401 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:15:56.697412 | orchestrator |
2026-04-05 06:15:56.697423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 06:15:56.697434 | orchestrator | Sunday 05 April 2026 06:15:42 +0000 (0:00:01.201) 1:02:19.625 **********
2026-04-05 06:15:56.697444 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:15:56.697455 | orchestrator |
2026-04-05 06:15:56.697466 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 06:15:56.697481 | orchestrator | Sunday 05 April 2026 06:15:44 +0000 (0:00:01.187) 1:02:20.813 **********
2026-04-05 06:15:56.697492 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-05 06:15:56.697503 | orchestrator |
2026-04-05 06:15:56.697514 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 06:15:56.697524 | orchestrator | Sunday 05 April 2026 06:15:45 +0000 (0:00:01.212) 1:02:22.026 **********
2026-04-05 06:15:56.697535 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-05 06:15:56.697546 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-05 06:15:56.697557 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-05 06:15:56.697568 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-05 06:15:56.697578 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-05 06:15:56.697589 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-05 06:15:56.697599 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-05 06:15:56.697610 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-05 06:15:56.697621 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 06:15:56.697632 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 06:15:56.697650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 06:15:56.697660 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 06:15:56.697671 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 06:15:56.697681 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 06:15:56.697692 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-05 06:15:56.697703 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-05 06:15:56.697714 | orchestrator |
2026-04-05 06:15:56.697724 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-05 06:15:56.697735 | orchestrator | Sunday 05 April 2026 06:15:51 +0000 (0:00:06.541) 1:02:28.567 **********
2026-04-05 06:15:56.697746 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-05 06:15:56.697756 | orchestrator |
2026-04-05 06:15:56.697767 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-05 06:15:56.697778 | orchestrator | Sunday 05 April 2026 06:15:53 +0000 (0:00:01.516) 1:02:29.895 **********
2026-04-05 06:15:56.697789 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:15:56.697801 | orchestrator |
2026-04-05 06:15:56.697812 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-05 06:15:56.697823 | orchestrator | Sunday 05 April 2026 06:15:54 +0000 (0:00:01.991) 1:02:31.411 **********
2026-04-05 06:15:56.697833 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-05 06:15:56.697844 | orchestrator |
2026-04-05 06:15:56.697855 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-05 06:15:56.697872 | orchestrator | Sunday 05 April 2026 06:15:56 +0000 (0:00:01.138) 1:02:33.403 **********
2026-04-05 06:16:47.088559 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088671 | orchestrator |
2026-04-05 06:16:47.088687 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-05 06:16:47.088700 | orchestrator | Sunday 05 April 2026 06:15:57 +0000 (0:00:01.134) 1:02:34.541 **********
2026-04-05 06:16:47.088712 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088722 | orchestrator |
2026-04-05 06:16:47.088734 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-05 06:16:47.088745 | orchestrator | Sunday 05 April 2026 06:15:58 +0000 (0:00:01.134) 1:02:35.676 **********
2026-04-05 06:16:47.088755 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088766 | orchestrator |
2026-04-05 06:16:47.088777 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-05 06:16:47.088788 | orchestrator | Sunday 05 April 2026 06:16:00 +0000 (0:00:01.199) 1:02:36.875 **********
2026-04-05 06:16:47.088799 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088810 | orchestrator |
2026-04-05 06:16:47.088820 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-05 06:16:47.088831 | orchestrator | Sunday 05 April 2026 06:16:01 +0000 (0:00:01.187) 1:02:38.063 **********
2026-04-05 06:16:47.088842 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088853 | orchestrator |
2026-04-05 06:16:47.088864 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-05 06:16:47.088876 | orchestrator | Sunday 05 April 2026 06:16:02 +0000 (0:00:01.151) 1:02:39.214 **********
2026-04-05 06:16:47.088888 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088899 | orchestrator |
2026-04-05 06:16:47.088910 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-05 06:16:47.088921 | orchestrator | Sunday 05 April 2026 06:16:03 +0000 (0:00:01.168) 1:02:40.383 **********
2026-04-05 06:16:47.088932 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.088968 | orchestrator |
2026-04-05 06:16:47.089030 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-05 06:16:47.089042 | orchestrator | Sunday 05 April 2026 06:16:04 +0000 (0:00:01.156) 1:02:41.539 **********
2026-04-05 06:16:47.089053 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.089063 | orchestrator |
2026-04-05 06:16:47.089074 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-05 06:16:47.089085 | orchestrator | Sunday 05 April 2026 06:16:05 +0000 (0:00:01.138) 1:02:42.677 **********
2026-04-05 06:16:47.089110 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.089124 | orchestrator |
2026-04-05 06:16:47.089138 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-05 06:16:47.089151 | orchestrator | Sunday 05 April 2026 06:16:07 +0000 (0:00:01.209) 1:02:43.887 **********
2026-04-05 06:16:47.089163 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.089176 | orchestrator |
2026-04-05 06:16:47.089188 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-05 06:16:47.089201 | orchestrator | Sunday 05 April 2026 06:16:08 +0000 (0:00:01.146) 1:02:45.033 **********
2026-04-05 06:16:47.089214 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:16:47.089226 | orchestrator |
2026-04-05 06:16:47.089239 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-05 06:16:47.089251 | orchestrator | Sunday 05 April 2026 06:16:09 +0000 (0:00:01.299) 1:02:46.333 **********
2026-04-05 06:16:47.089264 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-05 06:16:47.089277 | orchestrator |
2026-04-05 06:16:47.089289 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-05 06:16:47.089301 | orchestrator | Sunday 05 April 2026 06:16:13 +0000 (0:00:04.285) 1:02:50.619 **********
2026-04-05 06:16:47.089314 | orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 06:16:47.089328 | orchestrator | 2026-04-05 06:16:47.089341 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 06:16:47.089353 | orchestrator | Sunday 05 April 2026 06:16:15 +0000 (0:00:01.228) 1:02:51.847 ********** 2026-04-05 06:16:47.089368 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-05 06:16:47.089385 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-05 06:16:47.089399 | orchestrator | 2026-04-05 06:16:47.089410 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 06:16:47.089421 | orchestrator | Sunday 05 April 2026 06:16:20 +0000 (0:00:04.928) 1:02:56.776 ********** 2026-04-05 06:16:47.089432 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089442 | orchestrator | 2026-04-05 06:16:47.089453 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 06:16:47.089464 | orchestrator | Sunday 05 April 2026 06:16:21 +0000 (0:00:01.136) 1:02:57.913 ********** 2026-04-05 06:16:47.089474 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089485 | orchestrator | 2026-04-05 06:16:47.089496 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:16:47.089525 | orchestrator | Sunday 05 April 2026 06:16:22 +0000 (0:00:01.163) 1:02:59.076 ********** 2026-04-05 06:16:47.089537 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089547 | orchestrator | 2026-04-05 06:16:47.089567 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 06:16:47.089578 | orchestrator | Sunday 05 April 2026 06:16:23 +0000 (0:00:01.245) 1:03:00.322 ********** 2026-04-05 06:16:47.089589 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089599 | orchestrator | 2026-04-05 06:16:47.089610 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:16:47.089621 | orchestrator | Sunday 05 April 2026 06:16:24 +0000 (0:00:01.141) 1:03:01.463 ********** 2026-04-05 06:16:47.089632 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089643 | orchestrator | 2026-04-05 06:16:47.089654 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:16:47.089664 | orchestrator | Sunday 05 April 2026 06:16:25 +0000 (0:00:01.208) 1:03:02.672 ********** 2026-04-05 06:16:47.089675 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:16:47.089686 | orchestrator | 2026-04-05 06:16:47.089697 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:16:47.089708 | orchestrator | Sunday 05 April 2026 06:16:27 +0000 (0:00:01.329) 1:03:04.001 ********** 2026-04-05 06:16:47.089719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 06:16:47.089730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 06:16:47.089741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 06:16:47.089751 | orchestrator | skipping: 
[testbed-node-3] 2026-04-05 06:16:47.089762 | orchestrator | 2026-04-05 06:16:47.089773 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:16:47.089784 | orchestrator | Sunday 05 April 2026 06:16:28 +0000 (0:00:01.446) 1:03:05.447 ********** 2026-04-05 06:16:47.089794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 06:16:47.089805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 06:16:47.089815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 06:16:47.089826 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089837 | orchestrator | 2026-04-05 06:16:47.089848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:16:47.089859 | orchestrator | Sunday 05 April 2026 06:16:30 +0000 (0:00:01.963) 1:03:07.411 ********** 2026-04-05 06:16:47.089869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-05 06:16:47.089885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-05 06:16:47.089896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-05 06:16:47.089906 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.089917 | orchestrator | 2026-04-05 06:16:47.089928 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 06:16:47.089938 | orchestrator | Sunday 05 April 2026 06:16:32 +0000 (0:00:01.939) 1:03:09.350 ********** 2026-04-05 06:16:47.089949 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:16:47.089960 | orchestrator | 2026-04-05 06:16:47.089971 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:16:47.090015 | orchestrator | Sunday 05 April 2026 06:16:34 +0000 (0:00:01.388) 1:03:10.739 ********** 2026-04-05 06:16:47.090084 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-04-05 06:16:47.090095 | orchestrator | 2026-04-05 06:16:47.090106 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 06:16:47.090117 | orchestrator | Sunday 05 April 2026 06:16:35 +0000 (0:00:01.413) 1:03:12.152 ********** 2026-04-05 06:16:47.090127 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:16:47.090138 | orchestrator | 2026-04-05 06:16:47.090149 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 06:16:47.090160 | orchestrator | Sunday 05 April 2026 06:16:37 +0000 (0:00:01.769) 1:03:13.921 ********** 2026-04-05 06:16:47.090171 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-04-05 06:16:47.090181 | orchestrator | 2026-04-05 06:16:47.090192 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:16:47.090211 | orchestrator | Sunday 05 April 2026 06:16:38 +0000 (0:00:01.514) 1:03:15.435 ********** 2026-04-05 06:16:47.090222 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:16:47.090233 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 06:16:47.090244 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:16:47.090255 | orchestrator | 2026-04-05 06:16:47.090266 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:16:47.090276 | orchestrator | Sunday 05 April 2026 06:16:41 +0000 (0:00:03.187) 1:03:18.622 ********** 2026-04-05 06:16:47.090287 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:16:47.090298 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-05 06:16:47.090308 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:16:47.090319 | orchestrator | 2026-04-05 06:16:47.090330 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-05 06:16:47.090341 | orchestrator | Sunday 05 April 2026 06:16:43 +0000 (0:00:01.950) 1:03:20.573 ********** 2026-04-05 06:16:47.090352 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:16:47.090362 | orchestrator | 2026-04-05 06:16:47.090373 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 06:16:47.090384 | orchestrator | Sunday 05 April 2026 06:16:45 +0000 (0:00:01.159) 1:03:21.733 ********** 2026-04-05 06:16:47.090395 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-04-05 06:16:47.090407 | orchestrator | 2026-04-05 06:16:47.090418 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 06:16:47.090428 | orchestrator | Sunday 05 April 2026 06:16:46 +0000 (0:00:01.495) 1:03:23.228 ********** 2026-04-05 06:16:47.090447 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 06:18:04.668881 | orchestrator | 2026-04-05 06:18:04.668998 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 06:18:04.669017 | orchestrator | Sunday 05 April 2026 06:16:48 +0000 (0:00:01.638) 1:03:24.866 ********** 2026-04-05 06:18:04.669029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:18:04.669042 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 06:18:04.669054 | orchestrator | 2026-04-05 06:18:04.669065 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:18:04.669076 | orchestrator | Sunday 05 April 2026 06:16:53 +0000 (0:00:05.610) 1:03:30.477 ********** 
2026-04-05 06:18:04.669087 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:18:04.669098 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:18:04.669109 | orchestrator | 2026-04-05 06:18:04.669120 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:18:04.669131 | orchestrator | Sunday 05 April 2026 06:16:57 +0000 (0:00:03.664) 1:03:34.142 ********** 2026-04-05 06:18:04.669142 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:18:04.669153 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:18:04.669165 | orchestrator | 2026-04-05 06:18:04.669175 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 06:18:04.669187 | orchestrator | Sunday 05 April 2026 06:16:59 +0000 (0:00:02.001) 1:03:36.144 ********** 2026-04-05 06:18:04.669198 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-05 06:18:04.669209 | orchestrator | 2026-04-05 06:18:04.669219 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 06:18:04.669230 | orchestrator | Sunday 05 April 2026 06:17:00 +0000 (0:00:01.561) 1:03:37.706 ********** 2026-04-05 06:18:04.669241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669343 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:18:04.669356 | orchestrator | 2026-04-05 06:18:04.669369 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 06:18:04.669382 | orchestrator | Sunday 05 April 2026 06:17:02 +0000 (0:00:01.695) 1:03:39.401 ********** 2026-04-05 06:18:04.669395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:18:04.669457 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:18:04.669468 | orchestrator | 2026-04-05 06:18:04.669479 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 06:18:04.669489 | orchestrator | Sunday 05 April 2026 06:17:04 +0000 (0:00:01.674) 1:03:41.076 ********** 2026-04-05 06:18:04.669499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:18:04.669512 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:18:04.669523 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:18:04.669533 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:18:04.669546 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:18:04.669556 | orchestrator | 2026-04-05 06:18:04.669567 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 06:18:04.669596 | orchestrator | Sunday 05 April 2026 06:17:36 +0000 (0:00:32.212) 1:04:13.289 ********** 2026-04-05 06:18:04.669607 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:18:04.669619 | orchestrator | 2026-04-05 06:18:04.669629 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 06:18:04.669640 | orchestrator | Sunday 05 April 2026 06:17:37 +0000 (0:00:01.120) 1:04:14.409 ********** 2026-04-05 06:18:04.669651 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:18:04.669661 | orchestrator | 2026-04-05 06:18:04.669672 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 06:18:04.669682 | orchestrator | Sunday 05 April 2026 06:17:38 +0000 (0:00:01.127) 1:04:15.536 ********** 2026-04-05 06:18:04.669701 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-04-05 06:18:04.669712 | orchestrator | 2026-04-05 06:18:04.669723 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-05 06:18:04.669733 | orchestrator | Sunday 05 April 2026 06:17:40 +0000 (0:00:01.497) 1:04:17.034 ********** 2026-04-05 06:18:04.669743 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-04-05 06:18:04.669754 | orchestrator | 2026-04-05 06:18:04.669764 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 06:18:04.669775 | orchestrator | Sunday 05 April 2026 06:17:41 +0000 (0:00:01.450) 1:04:18.484 ********** 2026-04-05 06:18:04.669785 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:18:04.669796 | orchestrator | 2026-04-05 06:18:04.669832 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 06:18:04.669843 | orchestrator | Sunday 05 April 2026 06:17:44 +0000 (0:00:02.576) 1:04:21.061 ********** 2026-04-05 06:18:04.669853 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:18:04.669864 | orchestrator | 2026-04-05 06:18:04.669875 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 06:18:04.669885 | orchestrator | Sunday 05 April 2026 06:17:46 +0000 (0:00:01.997) 1:04:23.059 ********** 2026-04-05 06:18:04.669896 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:18:04.669906 | orchestrator | 2026-04-05 06:18:04.669917 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 06:18:04.669927 | orchestrator | Sunday 05 April 2026 06:17:48 +0000 (0:00:02.223) 1:04:25.282 ********** 2026-04-05 06:18:04.669938 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-05 06:18:04.669949 | orchestrator | 2026-04-05 06:18:04.669965 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-05 06:18:04.669976 | 
orchestrator | 2026-04-05 06:18:04.669986 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 06:18:04.669997 | orchestrator | Sunday 05 April 2026 06:17:51 +0000 (0:00:02.979) 1:04:28.262 ********** 2026-04-05 06:18:04.670007 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-05 06:18:04.670077 | orchestrator | 2026-04-05 06:18:04.670089 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 06:18:04.670100 | orchestrator | Sunday 05 April 2026 06:17:52 +0000 (0:00:01.167) 1:04:29.430 ********** 2026-04-05 06:18:04.670110 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670121 | orchestrator | 2026-04-05 06:18:04.670131 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 06:18:04.670142 | orchestrator | Sunday 05 April 2026 06:17:54 +0000 (0:00:01.464) 1:04:30.894 ********** 2026-04-05 06:18:04.670152 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670163 | orchestrator | 2026-04-05 06:18:04.670173 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 06:18:04.670184 | orchestrator | Sunday 05 April 2026 06:17:55 +0000 (0:00:01.248) 1:04:32.142 ********** 2026-04-05 06:18:04.670194 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670205 | orchestrator | 2026-04-05 06:18:04.670215 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 06:18:04.670226 | orchestrator | Sunday 05 April 2026 06:17:56 +0000 (0:00:01.486) 1:04:33.629 ********** 2026-04-05 06:18:04.670237 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670247 | orchestrator | 2026-04-05 06:18:04.670258 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 06:18:04.670268 | orchestrator | Sunday 05 
April 2026 06:17:58 +0000 (0:00:01.152) 1:04:34.781 ********** 2026-04-05 06:18:04.670279 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670289 | orchestrator | 2026-04-05 06:18:04.670300 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 06:18:04.670311 | orchestrator | Sunday 05 April 2026 06:17:59 +0000 (0:00:01.331) 1:04:36.113 ********** 2026-04-05 06:18:04.670329 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670340 | orchestrator | 2026-04-05 06:18:04.670350 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 06:18:04.670361 | orchestrator | Sunday 05 April 2026 06:18:00 +0000 (0:00:01.176) 1:04:37.289 ********** 2026-04-05 06:18:04.670372 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:04.670382 | orchestrator | 2026-04-05 06:18:04.670393 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 06:18:04.670403 | orchestrator | Sunday 05 April 2026 06:18:01 +0000 (0:00:01.196) 1:04:38.486 ********** 2026-04-05 06:18:04.670414 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:04.670425 | orchestrator | 2026-04-05 06:18:04.670435 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 06:18:04.670446 | orchestrator | Sunday 05 April 2026 06:18:02 +0000 (0:00:01.156) 1:04:39.643 ********** 2026-04-05 06:18:04.670456 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:18:04.670467 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:18:04.670478 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:18:04.670488 | orchestrator | 2026-04-05 06:18:04.670499 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-05 06:18:04.670518 | orchestrator | Sunday 05 April 2026 06:18:04 +0000 (0:00:01.734) 1:04:41.377 ********** 2026-04-05 06:18:30.131343 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.131472 | orchestrator | 2026-04-05 06:18:30.131496 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 06:18:30.131515 | orchestrator | Sunday 05 April 2026 06:18:05 +0000 (0:00:01.279) 1:04:42.657 ********** 2026-04-05 06:18:30.131532 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:18:30.131548 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:18:30.131564 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:18:30.131581 | orchestrator | 2026-04-05 06:18:30.131597 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 06:18:30.131613 | orchestrator | Sunday 05 April 2026 06:18:08 +0000 (0:00:02.917) 1:04:45.574 ********** 2026-04-05 06:18:30.131631 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 06:18:30.131647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 06:18:30.131663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 06:18:30.131680 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.131697 | orchestrator | 2026-04-05 06:18:30.131713 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 06:18:30.131728 | orchestrator | Sunday 05 April 2026 06:18:10 +0000 (0:00:01.500) 1:04:47.074 ********** 2026-04-05 06:18:30.131776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131828 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131845 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.131860 | orchestrator | 2026-04-05 06:18:30.131877 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 06:18:30.131918 | orchestrator | Sunday 05 April 2026 06:18:12 +0000 (0:00:01.685) 1:04:48.760 ********** 2026-04-05 06:18:30.131938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131974 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:30.131990 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132005 | orchestrator | 2026-04-05 06:18:30.132020 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 06:18:30.132036 | orchestrator | Sunday 05 April 2026 06:18:13 +0000 (0:00:01.175) 1:04:49.936 ********** 2026-04-05 06:18:30.132081 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:18:06.459191', 'end': '2026-04-05 06:18:06.508503', 'delta': '0:00:00.049312', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 06:18:30.132103 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:18:07.066370', 'end': '2026-04-05 06:18:07.113074', 'delta': '0:00:00.046704', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 06:18:30.132118 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:18:07.641177', 'end': '2026-04-05 06:18:07.689687', 'delta': '0:00:00.048510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 06:18:30.132145 | orchestrator | 2026-04-05 06:18:30.132168 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 06:18:30.132185 | orchestrator | Sunday 05 April 2026 06:18:14 +0000 (0:00:01.201) 1:04:51.137 ********** 2026-04-05 06:18:30.132200 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.132216 | orchestrator | 2026-04-05 06:18:30.132233 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 06:18:30.132247 | orchestrator | Sunday 05 April 2026 06:18:15 +0000 (0:00:01.256) 1:04:52.393 ********** 2026-04-05 06:18:30.132264 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132279 | orchestrator | 2026-04-05 06:18:30.132294 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-05 06:18:30.132310 | orchestrator | Sunday 05 April 2026 06:18:16 +0000 (0:00:01.243) 1:04:53.636 ********** 2026-04-05 06:18:30.132326 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.132342 | orchestrator | 2026-04-05 06:18:30.132356 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 06:18:30.132372 | orchestrator | Sunday 05 April 2026 06:18:18 +0000 (0:00:01.164) 1:04:54.801 ********** 2026-04-05 06:18:30.132388 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:18:30.132404 | orchestrator | 2026-04-05 06:18:30.132419 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:18:30.132436 | orchestrator | Sunday 05 April 2026 06:18:20 +0000 (0:00:02.695) 1:04:57.497 ********** 2026-04-05 06:18:30.132451 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.132467 | orchestrator | 2026-04-05 06:18:30.132482 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 06:18:30.132497 | orchestrator | Sunday 05 April 2026 06:18:21 +0000 (0:00:01.208) 1:04:58.705 ********** 2026-04-05 06:18:30.132512 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132528 | orchestrator | 2026-04-05 06:18:30.132542 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 06:18:30.132557 | orchestrator | Sunday 05 April 2026 06:18:23 +0000 (0:00:01.110) 1:04:59.816 ********** 2026-04-05 06:18:30.132572 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132587 | orchestrator | 2026-04-05 06:18:30.132601 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:18:30.132617 | orchestrator | Sunday 05 April 2026 06:18:24 +0000 (0:00:01.254) 1:05:01.071 ********** 2026-04-05 06:18:30.132632 | orchestrator | 
skipping: [testbed-node-4] 2026-04-05 06:18:30.132648 | orchestrator | 2026-04-05 06:18:30.132663 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 06:18:30.132678 | orchestrator | Sunday 05 April 2026 06:18:25 +0000 (0:00:01.132) 1:05:02.204 ********** 2026-04-05 06:18:30.132693 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132707 | orchestrator | 2026-04-05 06:18:30.132722 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 06:18:30.132736 | orchestrator | Sunday 05 April 2026 06:18:26 +0000 (0:00:01.106) 1:05:03.310 ********** 2026-04-05 06:18:30.132794 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.132810 | orchestrator | 2026-04-05 06:18:30.132824 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 06:18:30.132839 | orchestrator | Sunday 05 April 2026 06:18:27 +0000 (0:00:01.189) 1:05:04.500 ********** 2026-04-05 06:18:30.132854 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:30.132869 | orchestrator | 2026-04-05 06:18:30.132884 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 06:18:30.132900 | orchestrator | Sunday 05 April 2026 06:18:28 +0000 (0:00:01.116) 1:05:05.616 ********** 2026-04-05 06:18:30.132916 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:30.132931 | orchestrator | 2026-04-05 06:18:30.132946 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 06:18:30.132976 | orchestrator | Sunday 05 April 2026 06:18:30 +0000 (0:00:01.223) 1:05:06.840 ********** 2026-04-05 06:18:32.622841 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:32.622947 | orchestrator | 2026-04-05 06:18:32.622965 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 06:18:32.622978 
| orchestrator | Sunday 05 April 2026 06:18:31 +0000 (0:00:01.096) 1:05:07.936 ********** 2026-04-05 06:18:32.622990 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:32.623002 | orchestrator | 2026-04-05 06:18:32.623013 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 06:18:32.623024 | orchestrator | Sunday 05 April 2026 06:18:32 +0000 (0:00:01.155) 1:05:09.091 ********** 2026-04-05 06:18:32.623038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}})  2026-04-05 06:18:32.623089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:18:32.623102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}})  2026-04-05 06:18:32.623115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:18:32.623196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}})  2026-04-05 06:18:32.623248 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}})  2026-04-05 06:18:32.623260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:32.623301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:18:33.964031 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:33.964156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:18:33.964175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:18:33.964191 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:33.964204 | orchestrator | 2026-04-05 06:18:33.964216 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:18:33.964300 | orchestrator | Sunday 05 April 2026 06:18:33 +0000 (0:00:01.373) 1:05:10.465 ********** 2026-04-05 06:18:33.964316 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964329 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d', 'dm-uuid-LVM-QaK3W844rt42GOFTlUj8FdGZI9r9RWxV9UXNqdgVjZjUGXMoJTsTPNuZxX2bMqRl'], 'uuids': ['4dbb6111-6798-410c-bf3d-466dc8e67441'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964358 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c', 'scsi-SQEMU_QEMU_HARDDISK_ff5ba5b2-ecfa-45a1-89e6-23476a027e2c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff5ba5b2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-I5EpQG-EleH-fR0J-OPmk-msG1-LWT3-tZDxKk', 'scsi-0QEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55', 'scsi-SQEMU_QEMU_HARDDISK_bb381a94-6fda-41aa-85e5-5a8e9e212f55'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964406 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964429 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964454 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:33.964479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9', 'dm-uuid-CRYPT-LUKS2-a863ce4c094f4d00878df4db794fb62c-ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430250 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--71b5f103--fb0e--5af6--8506--51783512c8b9-osd--block--71b5f103--fb0e--5af6--8506--51783512c8b9', 'dm-uuid-LVM-vuDwhUMJufQ1fExQZhy9OFWnq6AQ1VzHZDRAGlO8uNhpdaMYX7StPGvlNYck8zF9'], 'uuids': ['a863ce4c-094f-4d00-878d-f4db794fb62c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bb381a94', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZDRAGl-O8uN-hpda-MYX7-StPG-vlNY-ck8zF9']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-ftoKpe-7IPr-ppK4-Gzb3-Ti9i-DfkB-uPTgK2', 'scsi-0QEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c', 'scsi-SQEMU_QEMU_HARDDISK_e411545b-3ce6-4571-8392-eb6cf6edb95c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e411545b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8259097b--349e--523a--9f4d--33b374f7dc5d-osd--block--8259097b--349e--523a--9f4d--33b374f7dc5d']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fe672449', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1', 'scsi-SQEMU_QEMU_HARDDISK_fe672449-71e9-4e4c-878a-a876f42bef0a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl', 'dm-uuid-CRYPT-LUKS2-4dbb61116798410cbf3d466dc8e67441-9UXNqd-gVjZ-jUGX-MoJT-sTPN-uZxX-2bMqRl'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:18:39.430605 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:18:39.430618 | orchestrator | 2026-04-05 06:18:39.430630 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-05 06:18:39.430642 | orchestrator | Sunday 05 April 2026 06:18:35 +0000 (0:00:01.487) 1:05:11.952 ********** 2026-04-05 06:18:39.430654 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:39.430665 | orchestrator | 2026-04-05 06:18:39.430677 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-05 06:18:39.430688 | orchestrator | Sunday 05 April 2026 06:18:36 +0000 (0:00:01.621) 1:05:13.573 ********** 2026-04-05 06:18:39.430698 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:39.430709 | orchestrator | 2026-04-05 06:18:39.430751 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 06:18:39.430775 | orchestrator | Sunday 05 April 2026 06:18:37 +0000 (0:00:01.134) 1:05:14.707 ********** 2026-04-05 06:18:39.430797 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:18:39.430818 | orchestrator | 2026-04-05 06:18:39.430835 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 06:18:39.430857 | orchestrator | Sunday 05 April 2026 06:18:39 +0000 (0:00:01.436) 1:05:16.144 ********** 2026-04-05 06:19:22.221500 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.221616 | orchestrator | 2026-04-05 06:19:22.221633 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-05 06:19:22.221718 | orchestrator | Sunday 05 April 2026 06:18:40 +0000 (0:00:01.210) 1:05:17.354 ********** 2026-04-05 06:19:22.221732 | orchestrator | skipping: [testbed-node-4] 2026-04-05 
06:19:22.221743 | orchestrator | 2026-04-05 06:19:22.221781 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-05 06:19:22.221793 | orchestrator | Sunday 05 April 2026 06:18:41 +0000 (0:00:01.333) 1:05:18.688 ********** 2026-04-05 06:19:22.221803 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.221814 | orchestrator | 2026-04-05 06:19:22.221825 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-05 06:19:22.221835 | orchestrator | Sunday 05 April 2026 06:18:43 +0000 (0:00:01.205) 1:05:19.893 ********** 2026-04-05 06:19:22.221847 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-05 06:19:22.221859 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-05 06:19:22.221869 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-05 06:19:22.221880 | orchestrator | 2026-04-05 06:19:22.221891 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-05 06:19:22.221902 | orchestrator | Sunday 05 April 2026 06:18:44 +0000 (0:00:01.715) 1:05:21.609 ********** 2026-04-05 06:19:22.221913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-05 06:19:22.221924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-05 06:19:22.221934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-05 06:19:22.221945 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.221955 | orchestrator | 2026-04-05 06:19:22.221966 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-05 06:19:22.221977 | orchestrator | Sunday 05 April 2026 06:18:46 +0000 (0:00:01.181) 1:05:22.791 ********** 2026-04-05 06:19:22.221988 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-05 06:19:22.222000 | 
orchestrator | 2026-04-05 06:19:22.222012 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:19:22.222088 | orchestrator | Sunday 05 April 2026 06:18:47 +0000 (0:00:01.131) 1:05:23.923 ********** 2026-04-05 06:19:22.222101 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222114 | orchestrator | 2026-04-05 06:19:22.222127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 06:19:22.222150 | orchestrator | Sunday 05 April 2026 06:18:48 +0000 (0:00:01.126) 1:05:25.050 ********** 2026-04-05 06:19:22.222162 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222174 | orchestrator | 2026-04-05 06:19:22.222187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:19:22.222199 | orchestrator | Sunday 05 April 2026 06:18:49 +0000 (0:00:01.125) 1:05:26.175 ********** 2026-04-05 06:19:22.222212 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222224 | orchestrator | 2026-04-05 06:19:22.222236 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:19:22.222249 | orchestrator | Sunday 05 April 2026 06:18:50 +0000 (0:00:01.164) 1:05:27.340 ********** 2026-04-05 06:19:22.222261 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:19:22.222273 | orchestrator | 2026-04-05 06:19:22.222286 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:19:22.222297 | orchestrator | Sunday 05 April 2026 06:18:51 +0000 (0:00:01.372) 1:05:28.712 ********** 2026-04-05 06:19:22.222309 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:19:22.222322 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:19:22.222334 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-05 06:19:22.222347 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222359 | orchestrator | 2026-04-05 06:19:22.222372 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:19:22.222384 | orchestrator | Sunday 05 April 2026 06:18:53 +0000 (0:00:01.416) 1:05:30.129 ********** 2026-04-05 06:19:22.222396 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:19:22.222409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:19:22.222430 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 06:19:22.222441 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222452 | orchestrator | 2026-04-05 06:19:22.222463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:19:22.222474 | orchestrator | Sunday 05 April 2026 06:18:54 +0000 (0:00:01.493) 1:05:31.622 ********** 2026-04-05 06:19:22.222484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:19:22.222495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:19:22.222506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 06:19:22.222517 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.222527 | orchestrator | 2026-04-05 06:19:22.222538 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 06:19:22.222549 | orchestrator | Sunday 05 April 2026 06:18:56 +0000 (0:00:01.566) 1:05:33.189 ********** 2026-04-05 06:19:22.222560 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:19:22.222571 | orchestrator | 2026-04-05 06:19:22.222582 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:19:22.222607 | orchestrator | Sunday 05 April 2026 06:18:57 +0000 
(0:00:01.177) 1:05:34.367 ********** 2026-04-05 06:19:22.222618 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-05 06:19:22.222628 | orchestrator | 2026-04-05 06:19:22.222663 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-05 06:19:22.222684 | orchestrator | Sunday 05 April 2026 06:18:59 +0000 (0:00:01.381) 1:05:35.749 ********** 2026-04-05 06:19:22.222713 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:19:22.222725 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:19:22.222736 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:19:22.222746 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-05 06:19:22.222757 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-05 06:19:22.222768 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:19:22.222778 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 06:19:22.222789 | orchestrator | 2026-04-05 06:19:22.222800 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-05 06:19:22.222811 | orchestrator | Sunday 05 April 2026 06:19:00 +0000 (0:00:01.882) 1:05:37.631 ********** 2026-04-05 06:19:22.222821 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:19:22.222832 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:19:22.222843 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:19:22.222853 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-05 06:19:22.222864 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-05 06:19:22.222875 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-05 06:19:22.222885 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-05 06:19:22.222896 | orchestrator | 2026-04-05 06:19:22.222906 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-05 06:19:22.222917 | orchestrator | Sunday 05 April 2026 06:19:03 +0000 (0:00:02.455) 1:05:40.087 ********** 2026-04-05 06:19:22.222928 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:19:22.222938 | orchestrator | 2026-04-05 06:19:22.222949 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-05 06:19:22.222959 | orchestrator | Sunday 05 April 2026 06:19:05 +0000 (0:00:01.869) 1:05:41.956 ********** 2026-04-05 06:19:22.222979 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:19:22.222990 | orchestrator | 2026-04-05 06:19:22.223001 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-05 06:19:22.223011 | orchestrator | Sunday 05 April 2026 06:19:08 +0000 (0:00:02.795) 1:05:44.751 ********** 2026-04-05 06:19:22.223022 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:19:22.223033 | orchestrator | 2026-04-05 06:19:22.223043 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-05 06:19:22.223054 | orchestrator | Sunday 05 April 2026 06:19:10 +0000 (0:00:01.999) 1:05:46.751 ********** 2026-04-05 06:19:22.223064 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-05 06:19:22.223075 | orchestrator | 2026-04-05 06:19:22.223086 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-05 06:19:22.223096 | orchestrator | Sunday 05 April 2026 06:19:11 +0000 (0:00:01.462) 1:05:48.214 ********** 2026-04-05 06:19:22.223107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-05 06:19:22.223117 | orchestrator | 2026-04-05 06:19:22.223128 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-05 06:19:22.223139 | orchestrator | Sunday 05 April 2026 06:19:12 +0000 (0:00:01.135) 1:05:49.349 ********** 2026-04-05 06:19:22.223149 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.223160 | orchestrator | 2026-04-05 06:19:22.223171 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-05 06:19:22.223181 | orchestrator | Sunday 05 April 2026 06:19:13 +0000 (0:00:01.166) 1:05:50.516 ********** 2026-04-05 06:19:22.223192 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:19:22.223203 | orchestrator | 2026-04-05 06:19:22.223213 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-05 06:19:22.223224 | orchestrator | Sunday 05 April 2026 06:19:15 +0000 (0:00:01.547) 1:05:52.064 ********** 2026-04-05 06:19:22.223234 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:19:22.223245 | orchestrator | 2026-04-05 06:19:22.223256 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-05 06:19:22.223266 | orchestrator | Sunday 05 April 2026 06:19:16 +0000 (0:00:01.569) 1:05:53.634 ********** 2026-04-05 06:19:22.223277 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:19:22.223287 | orchestrator | 2026-04-05 06:19:22.223298 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-05 06:19:22.223309 | orchestrator | Sunday 05 April 2026 06:19:18 +0000 (0:00:01.637) 1:05:55.272 ********** 2026-04-05 06:19:22.223320 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.223330 | orchestrator | 2026-04-05 06:19:22.223341 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-05 06:19:22.223351 | orchestrator | Sunday 05 April 2026 06:19:19 +0000 (0:00:01.219) 1:05:56.491 ********** 2026-04-05 06:19:22.223367 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.223378 | orchestrator | 2026-04-05 06:19:22.223389 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-05 06:19:22.223400 | orchestrator | Sunday 05 April 2026 06:19:21 +0000 (0:00:01.249) 1:05:57.741 ********** 2026-04-05 06:19:22.223410 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:19:22.223421 | orchestrator | 2026-04-05 06:19:22.223432 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-05 06:19:22.223449 | orchestrator | Sunday 05 April 2026 06:19:22 +0000 (0:00:01.182) 1:05:58.923 ********** 2026-04-05 06:20:01.870634 | 
orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.870751 | orchestrator | 2026-04-05 06:20:01.870768 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-05 06:20:01.870781 | orchestrator | Sunday 05 April 2026 06:19:23 +0000 (0:00:01.599) 1:06:00.522 ********** 2026-04-05 06:20:01.870817 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.870829 | orchestrator | 2026-04-05 06:20:01.870840 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-05 06:20:01.870851 | orchestrator | Sunday 05 April 2026 06:19:25 +0000 (0:00:01.585) 1:06:02.108 ********** 2026-04-05 06:20:01.870862 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.870873 | orchestrator | 2026-04-05 06:20:01.870884 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-05 06:20:01.870896 | orchestrator | Sunday 05 April 2026 06:19:26 +0000 (0:00:00.764) 1:06:02.873 ********** 2026-04-05 06:20:01.870907 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.870917 | orchestrator | 2026-04-05 06:20:01.870929 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-05 06:20:01.870940 | orchestrator | Sunday 05 April 2026 06:19:27 +0000 (0:00:00.952) 1:06:03.825 ********** 2026-04-05 06:20:01.870950 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.870961 | orchestrator | 2026-04-05 06:20:01.870972 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-05 06:20:01.870983 | orchestrator | Sunday 05 April 2026 06:19:27 +0000 (0:00:00.821) 1:06:04.646 ********** 2026-04-05 06:20:01.870994 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871005 | orchestrator | 2026-04-05 06:20:01.871016 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-05 06:20:01.871027 
| orchestrator | Sunday 05 April 2026 06:19:28 +0000 (0:00:00.801) 1:06:05.448 ********** 2026-04-05 06:20:01.871038 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871049 | orchestrator | 2026-04-05 06:20:01.871060 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-05 06:20:01.871071 | orchestrator | Sunday 05 April 2026 06:19:29 +0000 (0:00:00.791) 1:06:06.240 ********** 2026-04-05 06:20:01.871082 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871092 | orchestrator | 2026-04-05 06:20:01.871103 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-05 06:20:01.871114 | orchestrator | Sunday 05 April 2026 06:19:30 +0000 (0:00:00.788) 1:06:07.029 ********** 2026-04-05 06:20:01.871128 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871141 | orchestrator | 2026-04-05 06:20:01.871154 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-05 06:20:01.871167 | orchestrator | Sunday 05 April 2026 06:19:31 +0000 (0:00:00.786) 1:06:07.815 ********** 2026-04-05 06:20:01.871180 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871193 | orchestrator | 2026-04-05 06:20:01.871204 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-05 06:20:01.871215 | orchestrator | Sunday 05 April 2026 06:19:31 +0000 (0:00:00.789) 1:06:08.604 ********** 2026-04-05 06:20:01.871226 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871237 | orchestrator | 2026-04-05 06:20:01.871247 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-05 06:20:01.871258 | orchestrator | Sunday 05 April 2026 06:19:32 +0000 (0:00:00.792) 1:06:09.396 ********** 2026-04-05 06:20:01.871269 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871280 | orchestrator | 2026-04-05 06:20:01.871291 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-05 06:20:01.871302 | orchestrator | Sunday 05 April 2026 06:19:33 +0000 (0:00:00.798) 1:06:10.195 ********** 2026-04-05 06:20:01.871313 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871323 | orchestrator | 2026-04-05 06:20:01.871334 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-05 06:20:01.871345 | orchestrator | Sunday 05 April 2026 06:19:34 +0000 (0:00:00.803) 1:06:10.999 ********** 2026-04-05 06:20:01.871356 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871367 | orchestrator | 2026-04-05 06:20:01.871378 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-05 06:20:01.871388 | orchestrator | Sunday 05 April 2026 06:19:35 +0000 (0:00:00.801) 1:06:11.801 ********** 2026-04-05 06:20:01.871408 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871419 | orchestrator | 2026-04-05 06:20:01.871430 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-05 06:20:01.871441 | orchestrator | Sunday 05 April 2026 06:19:35 +0000 (0:00:00.826) 1:06:12.628 ********** 2026-04-05 06:20:01.871452 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871463 | orchestrator | 2026-04-05 06:20:01.871474 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-05 06:20:01.871485 | orchestrator | Sunday 05 April 2026 06:19:36 +0000 (0:00:00.800) 1:06:13.428 ********** 2026-04-05 06:20:01.871495 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871506 | orchestrator | 2026-04-05 06:20:01.871517 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-05 06:20:01.871528 | orchestrator | Sunday 05 April 2026 06:19:37 +0000 (0:00:01.017) 1:06:14.446 ********** 
2026-04-05 06:20:01.871539 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871550 | orchestrator | 2026-04-05 06:20:01.871561 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-05 06:20:01.871592 | orchestrator | Sunday 05 April 2026 06:19:38 +0000 (0:00:00.766) 1:06:15.213 ********** 2026-04-05 06:20:01.871604 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871614 | orchestrator | 2026-04-05 06:20:01.871641 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-05 06:20:01.871652 | orchestrator | Sunday 05 April 2026 06:19:39 +0000 (0:00:00.853) 1:06:16.067 ********** 2026-04-05 06:20:01.871663 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871674 | orchestrator | 2026-04-05 06:20:01.871685 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-05 06:20:01.871696 | orchestrator | Sunday 05 April 2026 06:19:40 +0000 (0:00:00.870) 1:06:16.937 ********** 2026-04-05 06:20:01.871707 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871718 | orchestrator | 2026-04-05 06:20:01.871746 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-05 06:20:01.871757 | orchestrator | Sunday 05 April 2026 06:19:41 +0000 (0:00:00.812) 1:06:17.750 ********** 2026-04-05 06:20:01.871768 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871779 | orchestrator | 2026-04-05 06:20:01.871790 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-05 06:20:01.871800 | orchestrator | Sunday 05 April 2026 06:19:41 +0000 (0:00:00.784) 1:06:18.534 ********** 2026-04-05 06:20:01.871811 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871822 | orchestrator | 2026-04-05 06:20:01.871833 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-05 06:20:01.871844 | orchestrator | Sunday 05 April 2026 06:19:42 +0000 (0:00:00.750) 1:06:19.284 ********** 2026-04-05 06:20:01.871854 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.871865 | orchestrator | 2026-04-05 06:20:01.871876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-05 06:20:01.871887 | orchestrator | Sunday 05 April 2026 06:19:43 +0000 (0:00:00.830) 1:06:20.115 ********** 2026-04-05 06:20:01.871898 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871908 | orchestrator | 2026-04-05 06:20:01.871919 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-05 06:20:01.871930 | orchestrator | Sunday 05 April 2026 06:19:44 +0000 (0:00:01.578) 1:06:21.693 ********** 2026-04-05 06:20:01.871941 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.871952 | orchestrator | 2026-04-05 06:20:01.871962 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-05 06:20:01.871973 | orchestrator | Sunday 05 April 2026 06:19:46 +0000 (0:00:01.813) 1:06:23.506 ********** 2026-04-05 06:20:01.871984 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-05 06:20:01.871996 | orchestrator | 2026-04-05 06:20:01.872007 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-05 06:20:01.872018 | orchestrator | Sunday 05 April 2026 06:19:47 +0000 (0:00:01.130) 1:06:24.637 ********** 2026-04-05 06:20:01.872036 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872047 | orchestrator | 2026-04-05 06:20:01.872058 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-05 06:20:01.872069 | orchestrator | Sunday 05 April 2026 06:19:49 +0000 (0:00:01.343) 1:06:25.981 ********** 
2026-04-05 06:20:01.872080 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872091 | orchestrator | 2026-04-05 06:20:01.872101 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-05 06:20:01.872112 | orchestrator | Sunday 05 April 2026 06:19:50 +0000 (0:00:01.166) 1:06:27.147 ********** 2026-04-05 06:20:01.872123 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-05 06:20:01.872147 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-05 06:20:01.872169 | orchestrator | 2026-04-05 06:20:01.872180 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-05 06:20:01.872191 | orchestrator | Sunday 05 April 2026 06:19:52 +0000 (0:00:01.833) 1:06:28.981 ********** 2026-04-05 06:20:01.872202 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.872212 | orchestrator | 2026-04-05 06:20:01.872223 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-05 06:20:01.872234 | orchestrator | Sunday 05 April 2026 06:19:53 +0000 (0:00:01.487) 1:06:30.469 ********** 2026-04-05 06:20:01.872245 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872256 | orchestrator | 2026-04-05 06:20:01.872266 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-05 06:20:01.872277 | orchestrator | Sunday 05 April 2026 06:19:54 +0000 (0:00:01.146) 1:06:31.615 ********** 2026-04-05 06:20:01.872288 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872299 | orchestrator | 2026-04-05 06:20:01.872310 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-05 06:20:01.872320 | orchestrator | Sunday 05 April 2026 06:19:55 +0000 (0:00:00.852) 1:06:32.468 ********** 2026-04-05 06:20:01.872331 | orchestrator | 
skipping: [testbed-node-4] 2026-04-05 06:20:01.872342 | orchestrator | 2026-04-05 06:20:01.872353 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-05 06:20:01.872364 | orchestrator | Sunday 05 April 2026 06:19:56 +0000 (0:00:00.810) 1:06:33.279 ********** 2026-04-05 06:20:01.872374 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-05 06:20:01.872385 | orchestrator | 2026-04-05 06:20:01.872396 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-05 06:20:01.872407 | orchestrator | Sunday 05 April 2026 06:19:57 +0000 (0:00:01.102) 1:06:34.381 ********** 2026-04-05 06:20:01.872418 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:01.872429 | orchestrator | 2026-04-05 06:20:01.872440 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-05 06:20:01.872450 | orchestrator | Sunday 05 April 2026 06:19:59 +0000 (0:00:01.715) 1:06:36.097 ********** 2026-04-05 06:20:01.872461 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-05 06:20:01.872472 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-05 06:20:01.872483 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-05 06:20:01.872494 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872505 | orchestrator | 2026-04-05 06:20:01.872520 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-05 06:20:01.872531 | orchestrator | Sunday 05 April 2026 06:20:00 +0000 (0:00:01.182) 1:06:37.279 ********** 2026-04-05 06:20:01.872542 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872553 | orchestrator | 2026-04-05 06:20:01.872581 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-05 06:20:01.872592 | orchestrator | Sunday 05 April 2026 06:20:01 +0000 (0:00:01.109) 1:06:38.388 ********** 2026-04-05 06:20:01.872611 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:01.872622 | orchestrator | 2026-04-05 06:20:01.872639 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-05 06:20:45.611399 | orchestrator | Sunday 05 April 2026 06:20:02 +0000 (0:00:01.199) 1:06:39.588 ********** 2026-04-05 06:20:45.611564 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611582 | orchestrator | 2026-04-05 06:20:45.611595 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-05 06:20:45.611607 | orchestrator | Sunday 05 April 2026 06:20:04 +0000 (0:00:01.295) 1:06:40.883 ********** 2026-04-05 06:20:45.611619 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611630 | orchestrator | 2026-04-05 06:20:45.611641 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-05 06:20:45.611652 | orchestrator | Sunday 05 April 2026 06:20:05 +0000 (0:00:01.178) 1:06:42.062 ********** 2026-04-05 06:20:45.611663 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611674 | orchestrator | 2026-04-05 06:20:45.611685 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-05 06:20:45.611696 | orchestrator | Sunday 05 April 2026 06:20:06 +0000 (0:00:00.784) 1:06:42.846 ********** 2026-04-05 06:20:45.611707 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:45.611719 | orchestrator | 2026-04-05 06:20:45.611730 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-05 06:20:45.611742 | orchestrator | Sunday 05 April 2026 06:20:08 +0000 (0:00:02.120) 1:06:44.967 ********** 2026-04-05 06:20:45.611753 | orchestrator | ok: 
[testbed-node-4] 2026-04-05 06:20:45.611764 | orchestrator | 2026-04-05 06:20:45.611775 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-05 06:20:45.611785 | orchestrator | Sunday 05 April 2026 06:20:09 +0000 (0:00:00.768) 1:06:45.736 ********** 2026-04-05 06:20:45.611796 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-05 06:20:45.611807 | orchestrator | 2026-04-05 06:20:45.611818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-05 06:20:45.611829 | orchestrator | Sunday 05 April 2026 06:20:10 +0000 (0:00:01.125) 1:06:46.861 ********** 2026-04-05 06:20:45.611840 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611851 | orchestrator | 2026-04-05 06:20:45.611862 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-05 06:20:45.611872 | orchestrator | Sunday 05 April 2026 06:20:11 +0000 (0:00:01.202) 1:06:48.064 ********** 2026-04-05 06:20:45.611883 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611894 | orchestrator | 2026-04-05 06:20:45.611905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-05 06:20:45.611915 | orchestrator | Sunday 05 April 2026 06:20:12 +0000 (0:00:01.162) 1:06:49.226 ********** 2026-04-05 06:20:45.611927 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611939 | orchestrator | 2026-04-05 06:20:45.611952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-05 06:20:45.611965 | orchestrator | Sunday 05 April 2026 06:20:13 +0000 (0:00:01.203) 1:06:50.429 ********** 2026-04-05 06:20:45.611978 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.611991 | orchestrator | 2026-04-05 06:20:45.612003 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-05 06:20:45.612016 | orchestrator | Sunday 05 April 2026 06:20:14 +0000 (0:00:01.167) 1:06:51.597 ********** 2026-04-05 06:20:45.612028 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612041 | orchestrator | 2026-04-05 06:20:45.612054 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-05 06:20:45.612067 | orchestrator | Sunday 05 April 2026 06:20:16 +0000 (0:00:01.128) 1:06:52.726 ********** 2026-04-05 06:20:45.612079 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612092 | orchestrator | 2026-04-05 06:20:45.612105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-05 06:20:45.612143 | orchestrator | Sunday 05 April 2026 06:20:17 +0000 (0:00:01.164) 1:06:53.891 ********** 2026-04-05 06:20:45.612156 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612169 | orchestrator | 2026-04-05 06:20:45.612181 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-05 06:20:45.612194 | orchestrator | Sunday 05 April 2026 06:20:18 +0000 (0:00:01.326) 1:06:55.217 ********** 2026-04-05 06:20:45.612206 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612219 | orchestrator | 2026-04-05 06:20:45.612232 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-05 06:20:45.612244 | orchestrator | Sunday 05 April 2026 06:20:19 +0000 (0:00:01.167) 1:06:56.384 ********** 2026-04-05 06:20:45.612257 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:20:45.612270 | orchestrator | 2026-04-05 06:20:45.612283 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-05 06:20:45.612295 | orchestrator | Sunday 05 April 2026 06:20:20 +0000 (0:00:00.839) 1:06:57.224 ********** 2026-04-05 06:20:45.612306 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-05 06:20:45.612318 | orchestrator | 2026-04-05 06:20:45.612328 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-05 06:20:45.612339 | orchestrator | Sunday 05 April 2026 06:20:21 +0000 (0:00:01.202) 1:06:58.427 ********** 2026-04-05 06:20:45.612351 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-05 06:20:45.612362 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-05 06:20:45.612373 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-05 06:20:45.612384 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-05 06:20:45.612410 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-05 06:20:45.612421 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-05 06:20:45.612432 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-05 06:20:45.612442 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-05 06:20:45.612453 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-05 06:20:45.612464 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-05 06:20:45.612475 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-05 06:20:45.612522 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-05 06:20:45.612535 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-05 06:20:45.612546 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-05 06:20:45.612557 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-05 06:20:45.612568 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-05 06:20:45.612578 | orchestrator | 2026-04-05 06:20:45.612589 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 06:20:45.612600 | orchestrator | Sunday 05 April 2026 06:20:27 +0000 (0:00:06.093) 1:07:04.520 ********** 2026-04-05 06:20:45.612611 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-05 06:20:45.612622 | orchestrator | 2026-04-05 06:20:45.612633 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-05 06:20:45.612644 | orchestrator | Sunday 05 April 2026 06:20:28 +0000 (0:00:01.119) 1:07:05.639 ********** 2026-04-05 06:20:45.612654 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:20:45.612667 | orchestrator | 2026-04-05 06:20:45.612678 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-05 06:20:45.612688 | orchestrator | Sunday 05 April 2026 06:20:30 +0000 (0:00:01.599) 1:07:07.239 ********** 2026-04-05 06:20:45.612699 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:20:45.612719 | orchestrator | 2026-04-05 06:20:45.612730 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 06:20:45.612740 | orchestrator | Sunday 05 April 2026 06:20:32 +0000 (0:00:01.641) 1:07:08.881 ********** 2026-04-05 06:20:45.612751 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612762 | orchestrator | 2026-04-05 06:20:45.612773 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 06:20:45.612783 | orchestrator | Sunday 05 April 2026 06:20:32 +0000 (0:00:00.789) 1:07:09.670 ********** 2026-04-05 06:20:45.612794 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612805 | 
orchestrator | 2026-04-05 06:20:45.612816 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 06:20:45.612827 | orchestrator | Sunday 05 April 2026 06:20:33 +0000 (0:00:00.896) 1:07:10.567 ********** 2026-04-05 06:20:45.612837 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612848 | orchestrator | 2026-04-05 06:20:45.612859 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 06:20:45.612870 | orchestrator | Sunday 05 April 2026 06:20:34 +0000 (0:00:00.961) 1:07:11.529 ********** 2026-04-05 06:20:45.612880 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612891 | orchestrator | 2026-04-05 06:20:45.612902 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 06:20:45.612913 | orchestrator | Sunday 05 April 2026 06:20:35 +0000 (0:00:00.774) 1:07:12.304 ********** 2026-04-05 06:20:45.612923 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612934 | orchestrator | 2026-04-05 06:20:45.612945 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 06:20:45.612956 | orchestrator | Sunday 05 April 2026 06:20:36 +0000 (0:00:00.790) 1:07:13.094 ********** 2026-04-05 06:20:45.612966 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.612977 | orchestrator | 2026-04-05 06:20:45.612988 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 06:20:45.612999 | orchestrator | Sunday 05 April 2026 06:20:37 +0000 (0:00:00.771) 1:07:13.866 ********** 2026-04-05 06:20:45.613010 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.613021 | orchestrator | 2026-04-05 06:20:45.613031 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-05 06:20:45.613042 | orchestrator | Sunday 05 April 2026 06:20:37 +0000 (0:00:00.800) 1:07:14.666 ********** 2026-04-05 06:20:45.613053 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.613064 | orchestrator | 2026-04-05 06:20:45.613075 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 06:20:45.613085 | orchestrator | Sunday 05 April 2026 06:20:38 +0000 (0:00:00.780) 1:07:15.447 ********** 2026-04-05 06:20:45.613096 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.613107 | orchestrator | 2026-04-05 06:20:45.613118 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 06:20:45.613129 | orchestrator | Sunday 05 April 2026 06:20:39 +0000 (0:00:00.807) 1:07:16.254 ********** 2026-04-05 06:20:45.613139 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.613150 | orchestrator | 2026-04-05 06:20:45.613161 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 06:20:45.613172 | orchestrator | Sunday 05 April 2026 06:20:40 +0000 (0:00:00.799) 1:07:17.054 ********** 2026-04-05 06:20:45.613182 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:20:45.613193 | orchestrator | 2026-04-05 06:20:45.613204 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 06:20:45.613215 | orchestrator | Sunday 05 April 2026 06:20:41 +0000 (0:00:00.814) 1:07:17.868 ********** 2026-04-05 06:20:45.613231 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-05 06:20:45.613243 | orchestrator | 2026-04-05 06:20:45.613253 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 06:20:45.613270 | orchestrator | Sunday 05 April 2026 06:20:45 +0000 (0:00:04.246) 1:07:22.115 ********** 2026-04-05 06:20:45.613281 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:20:45.613292 | orchestrator | 2026-04-05 06:20:45.613311 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 06:21:27.257407 | orchestrator | Sunday 05 April 2026 06:20:46 +0000 (0:00:00.832) 1:07:22.948 ********** 2026-04-05 06:21:27.257639 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-05 06:21:27.257674 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-05 06:21:27.257697 | orchestrator | 2026-04-05 06:21:27.257718 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 06:21:27.257738 | orchestrator | Sunday 05 April 2026 06:20:50 +0000 (0:00:04.643) 1:07:27.592 ********** 2026-04-05 06:21:27.257756 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.257777 | orchestrator | 2026-04-05 06:21:27.257796 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 06:21:27.257815 | orchestrator | Sunday 05 April 2026 06:20:51 +0000 (0:00:00.794) 1:07:28.386 ********** 2026-04-05 06:21:27.257832 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.257851 | orchestrator | 2026-04-05 06:21:27.257870 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:21:27.257890 | orchestrator | Sunday 05 April 2026 06:20:52 +0000 (0:00:00.808) 1:07:29.195 ********** 2026-04-05 06:21:27.257910 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.257929 | orchestrator | 2026-04-05 06:21:27.257950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 06:21:27.257969 | orchestrator | Sunday 05 April 2026 06:20:53 +0000 (0:00:01.005) 1:07:30.200 ********** 2026-04-05 06:21:27.257988 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.258012 | orchestrator | 2026-04-05 06:21:27.258115 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:21:27.258132 | orchestrator | Sunday 05 April 2026 06:20:54 +0000 (0:00:00.812) 1:07:31.014 ********** 2026-04-05 06:21:27.258151 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.258169 | orchestrator | 2026-04-05 06:21:27.258189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:21:27.258207 | orchestrator | Sunday 05 April 2026 06:20:55 +0000 (0:00:00.820) 1:07:31.834 ********** 2026-04-05 06:21:27.258225 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:21:27.258245 | orchestrator | 2026-04-05 06:21:27.258264 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:21:27.258282 | orchestrator | Sunday 05 April 2026 06:20:56 +0000 (0:00:00.926) 1:07:32.761 ********** 2026-04-05 06:21:27.258295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:21:27.258306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:21:27.258317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 06:21:27.258328 | orchestrator | skipping: 
[testbed-node-4] 2026-04-05 06:21:27.258339 | orchestrator | 2026-04-05 06:21:27.258350 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:21:27.258361 | orchestrator | Sunday 05 April 2026 06:20:57 +0000 (0:00:01.093) 1:07:33.855 ********** 2026-04-05 06:21:27.258371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:21:27.258444 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:21:27.258457 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 06:21:27.258467 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.258476 | orchestrator | 2026-04-05 06:21:27.258486 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:21:27.258496 | orchestrator | Sunday 05 April 2026 06:20:58 +0000 (0:00:01.112) 1:07:34.968 ********** 2026-04-05 06:21:27.258505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-05 06:21:27.258515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-05 06:21:27.258524 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-05 06:21:27.258534 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.258543 | orchestrator | 2026-04-05 06:21:27.258553 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 06:21:27.258563 | orchestrator | Sunday 05 April 2026 06:20:59 +0000 (0:00:01.170) 1:07:36.139 ********** 2026-04-05 06:21:27.258572 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:21:27.258581 | orchestrator | 2026-04-05 06:21:27.258591 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:21:27.258601 | orchestrator | Sunday 05 April 2026 06:21:00 +0000 (0:00:00.797) 1:07:36.936 ********** 2026-04-05 06:21:27.258610 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-05 06:21:27.258620 | orchestrator | 2026-04-05 06:21:27.258644 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 06:21:27.258654 | orchestrator | Sunday 05 April 2026 06:21:01 +0000 (0:00:01.027) 1:07:37.964 ********** 2026-04-05 06:21:27.258664 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:21:27.258674 | orchestrator | 2026-04-05 06:21:27.258683 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 06:21:27.258692 | orchestrator | Sunday 05 April 2026 06:21:02 +0000 (0:00:01.342) 1:07:39.306 ********** 2026-04-05 06:21:27.258702 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-04-05 06:21:27.258711 | orchestrator | 2026-04-05 06:21:27.258739 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:21:27.258750 | orchestrator | Sunday 05 April 2026 06:21:03 +0000 (0:00:01.145) 1:07:40.452 ********** 2026-04-05 06:21:27.258759 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:21:27.258769 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 06:21:27.258778 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:21:27.258788 | orchestrator | 2026-04-05 06:21:27.258797 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:21:27.258807 | orchestrator | Sunday 05 April 2026 06:21:07 +0000 (0:00:03.271) 1:07:43.724 ********** 2026-04-05 06:21:27.258816 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:21:27.258826 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-05 06:21:27.258835 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:21:27.258844 | orchestrator | 2026-04-05 06:21:27.258854 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-05 06:21:27.258863 | orchestrator | Sunday 05 April 2026 06:21:08 +0000 (0:00:01.933) 1:07:45.657 ********** 2026-04-05 06:21:27.258873 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.258882 | orchestrator | 2026-04-05 06:21:27.258891 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 06:21:27.258901 | orchestrator | Sunday 05 April 2026 06:21:09 +0000 (0:00:00.886) 1:07:46.544 ********** 2026-04-05 06:21:27.258910 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-04-05 06:21:27.258921 | orchestrator | 2026-04-05 06:21:27.258930 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 06:21:27.258940 | orchestrator | Sunday 05 April 2026 06:21:11 +0000 (0:00:01.178) 1:07:47.723 ********** 2026-04-05 06:21:27.258959 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:21:27.258971 | orchestrator | 2026-04-05 06:21:27.258980 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 06:21:27.258990 | orchestrator | Sunday 05 April 2026 06:21:12 +0000 (0:00:01.641) 1:07:49.364 ********** 2026-04-05 06:21:27.258999 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:21:27.259008 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 06:21:27.259018 | orchestrator | 2026-04-05 06:21:27.259028 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:21:27.259037 | orchestrator | Sunday 05 April 2026 06:21:17 +0000 (0:00:05.323) 1:07:54.688 ********** 
2026-04-05 06:21:27.259046 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:21:27.259056 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:21:27.259065 | orchestrator | 2026-04-05 06:21:27.259075 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:21:27.259084 | orchestrator | Sunday 05 April 2026 06:21:21 +0000 (0:00:03.253) 1:07:57.942 ********** 2026-04-05 06:21:27.259093 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:21:27.259103 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:21:27.259112 | orchestrator | 2026-04-05 06:21:27.259122 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 06:21:27.259131 | orchestrator | Sunday 05 April 2026 06:21:22 +0000 (0:00:01.666) 1:07:59.609 ********** 2026-04-05 06:21:27.259140 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-04-05 06:21:27.259150 | orchestrator | 2026-04-05 06:21:27.259159 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 06:21:27.259169 | orchestrator | Sunday 05 April 2026 06:21:24 +0000 (0:00:01.267) 1:08:00.876 ********** 2026-04-05 06:21:27.259178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259227 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:21:27.259236 | orchestrator | 2026-04-05 06:21:27.259246 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 06:21:27.259255 | orchestrator | Sunday 05 April 2026 06:21:26 +0000 (0:00:02.158) 1:08:03.035 ********** 2026-04-05 06:21:27.259269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:21:27.259305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:22:34.526143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:22:34.526241 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:22:34.526253 | orchestrator | 2026-04-05 06:22:34.526263 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 06:22:34.526273 | orchestrator | Sunday 05 April 2026 06:21:28 +0000 (0:00:02.264) 1:08:05.299 ********** 2026-04-05 06:22:34.526281 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:22:34.526291 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:22:34.526341 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:22:34.526352 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:22:34.526362 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:22:34.526370 | orchestrator | 2026-04-05 06:22:34.526378 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 06:22:34.526386 | orchestrator | Sunday 05 April 2026 06:21:59 +0000 (0:00:31.199) 1:08:36.499 ********** 2026-04-05 06:22:34.526394 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:22:34.526402 | orchestrator | 2026-04-05 06:22:34.526410 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 06:22:34.526418 | orchestrator | Sunday 05 April 2026 06:22:00 +0000 (0:00:00.824) 1:08:37.323 ********** 2026-04-05 06:22:34.526426 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:22:34.526434 | orchestrator | 2026-04-05 06:22:34.526442 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 06:22:34.526450 | orchestrator | Sunday 05 April 2026 06:22:01 +0000 (0:00:00.932) 1:08:38.256 ********** 2026-04-05 06:22:34.526458 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-04-05 06:22:34.526466 | orchestrator | 2026-04-05 06:22:34.526474 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-05 06:22:34.526482 | orchestrator | Sunday 05 April 2026 06:22:02 +0000 (0:00:01.151) 1:08:39.407 ********** 2026-04-05 06:22:34.526490 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-04-05 06:22:34.526498 | orchestrator | 2026-04-05 06:22:34.526505 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 06:22:34.526513 | orchestrator | Sunday 05 April 2026 06:22:03 +0000 (0:00:01.140) 1:08:40.548 ********** 2026-04-05 06:22:34.526521 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:22:34.526530 | orchestrator | 2026-04-05 06:22:34.526538 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 06:22:34.526545 | orchestrator | Sunday 05 April 2026 06:22:05 +0000 (0:00:02.156) 1:08:42.705 ********** 2026-04-05 06:22:34.526553 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:22:34.526561 | orchestrator | 2026-04-05 06:22:34.526569 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 06:22:34.526577 | orchestrator | Sunday 05 April 2026 06:22:07 +0000 (0:00:01.939) 1:08:44.644 ********** 2026-04-05 06:22:34.526584 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:22:34.526592 | orchestrator | 2026-04-05 06:22:34.526600 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 06:22:34.526608 | orchestrator | Sunday 05 April 2026 06:22:10 +0000 (0:00:02.233) 1:08:46.878 ********** 2026-04-05 06:22:34.526616 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-05 06:22:34.526646 | orchestrator | 2026-04-05 06:22:34.526655 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-05 06:22:34.526665 | 
orchestrator | 2026-04-05 06:22:34.526675 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-05 06:22:34.526684 | orchestrator | Sunday 05 April 2026 06:22:13 +0000 (0:00:03.159) 1:08:50.037 ********** 2026-04-05 06:22:34.526693 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-05 06:22:34.526705 | orchestrator | 2026-04-05 06:22:34.526719 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-05 06:22:34.526731 | orchestrator | Sunday 05 April 2026 06:22:14 +0000 (0:00:01.138) 1:08:51.175 ********** 2026-04-05 06:22:34.526744 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526757 | orchestrator | 2026-04-05 06:22:34.526771 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-05 06:22:34.526801 | orchestrator | Sunday 05 April 2026 06:22:16 +0000 (0:00:01.544) 1:08:52.720 ********** 2026-04-05 06:22:34.526815 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526829 | orchestrator | 2026-04-05 06:22:34.526839 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-05 06:22:34.526848 | orchestrator | Sunday 05 April 2026 06:22:17 +0000 (0:00:01.151) 1:08:53.872 ********** 2026-04-05 06:22:34.526857 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526866 | orchestrator | 2026-04-05 06:22:34.526875 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-05 06:22:34.526884 | orchestrator | Sunday 05 April 2026 06:22:18 +0000 (0:00:01.508) 1:08:55.380 ********** 2026-04-05 06:22:34.526893 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526902 | orchestrator | 2026-04-05 06:22:34.526926 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-05 06:22:34.526936 | orchestrator | Sunday 05 
April 2026 06:22:19 +0000 (0:00:01.141) 1:08:56.522 ********** 2026-04-05 06:22:34.526945 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526954 | orchestrator | 2026-04-05 06:22:34.526962 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-05 06:22:34.526970 | orchestrator | Sunday 05 April 2026 06:22:20 +0000 (0:00:01.192) 1:08:57.715 ********** 2026-04-05 06:22:34.526978 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.526986 | orchestrator | 2026-04-05 06:22:34.526994 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-05 06:22:34.527002 | orchestrator | Sunday 05 April 2026 06:22:22 +0000 (0:00:01.156) 1:08:58.871 ********** 2026-04-05 06:22:34.527009 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:34.527017 | orchestrator | 2026-04-05 06:22:34.527025 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-05 06:22:34.527032 | orchestrator | Sunday 05 April 2026 06:22:23 +0000 (0:00:01.144) 1:09:00.016 ********** 2026-04-05 06:22:34.527040 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.527048 | orchestrator | 2026-04-05 06:22:34.527056 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-05 06:22:34.527063 | orchestrator | Sunday 05 April 2026 06:22:24 +0000 (0:00:01.138) 1:09:01.155 ********** 2026-04-05 06:22:34.527071 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:22:34.527079 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:22:34.527087 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:22:34.527094 | orchestrator | 2026-04-05 06:22:34.527102 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-05 06:22:34.527110 | orchestrator | Sunday 05 April 2026 06:22:26 +0000 (0:00:02.133) 1:09:03.288 ********** 2026-04-05 06:22:34.527118 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:34.527125 | orchestrator | 2026-04-05 06:22:34.527133 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-05 06:22:34.527153 | orchestrator | Sunday 05 April 2026 06:22:27 +0000 (0:00:01.303) 1:09:04.592 ********** 2026-04-05 06:22:34.527161 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-05 06:22:34.527168 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-05 06:22:34.527176 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-05 06:22:34.527184 | orchestrator | 2026-04-05 06:22:34.527192 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-05 06:22:34.527200 | orchestrator | Sunday 05 April 2026 06:22:31 +0000 (0:00:03.433) 1:09:08.025 ********** 2026-04-05 06:22:34.527207 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-05 06:22:34.527216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-05 06:22:34.527223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-05 06:22:34.527231 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:34.527239 | orchestrator | 2026-04-05 06:22:34.527246 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-05 06:22:34.527254 | orchestrator | Sunday 05 April 2026 06:22:32 +0000 (0:00:01.450) 1:09:09.476 ********** 2026-04-05 06:22:34.527264 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-05 06:22:34.527275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-05 06:22:34.527283 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-05 06:22:34.527291 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:34.527317 | orchestrator | 2026-04-05 06:22:34.527326 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-05 06:22:34.527334 | orchestrator | Sunday 05 April 2026 06:22:34 +0000 (0:00:01.682) 1:09:11.158 ********** 2026-04-05 06:22:34.527349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:34.527366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:54.968826 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:54.968934 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.968951 | orchestrator | 2026-04-05 06:22:54.968963 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-05 06:22:54.968976 | orchestrator | Sunday 05 April 2026 06:22:35 +0000 (0:00:01.202) 1:09:12.361 ********** 2026-04-05 06:22:54.969015 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f92e3d403988', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-05 06:22:28.825237', 'end': '2026-04-05 06:22:28.869264', 'delta': '0:00:00.044027', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f92e3d403988'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-05 06:22:54.969031 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9dc375a9b789', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-05 06:22:29.386360', 'end': '2026-04-05 06:22:29.436429', 'delta': '0:00:00.050069', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9dc375a9b789'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-05 06:22:54.969061 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f7093b7c0357', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-05 06:22:30.013161', 'end': '2026-04-05 06:22:30.055441', 'delta': '0:00:00.042280', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f7093b7c0357'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-05 06:22:54.969073 | orchestrator | 2026-04-05 06:22:54.969084 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-05 06:22:54.969095 | orchestrator | Sunday 05 April 2026 06:22:36 +0000 (0:00:01.235) 1:09:13.596 ********** 2026-04-05 06:22:54.969106 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969118 | orchestrator | 2026-04-05 06:22:54.969128 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-05 06:22:54.969139 | orchestrator | Sunday 05 April 2026 06:22:38 +0000 (0:00:01.318) 1:09:14.915 ********** 2026-04-05 06:22:54.969150 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969161 | orchestrator | 2026-04-05 06:22:54.969171 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-05 06:22:54.969182 | orchestrator | Sunday 05 April 2026 06:22:39 +0000 (0:00:01.300) 1:09:16.215 ********** 2026-04-05 06:22:54.969193 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969203 | orchestrator | 2026-04-05 06:22:54.969225 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-05 06:22:54.969236 | orchestrator | Sunday 05 April 2026 06:22:40 +0000 (0:00:01.212) 1:09:17.427 ********** 2026-04-05 06:22:54.969247 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:22:54.969257 | orchestrator | 2026-04-05 06:22:54.969294 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:22:54.969306 | orchestrator | Sunday 05 April 2026 06:22:42 +0000 (0:00:02.023) 1:09:19.451 ********** 2026-04-05 06:22:54.969316 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969327 | orchestrator | 2026-04-05 06:22:54.969338 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-05 06:22:54.969357 | orchestrator | Sunday 05 April 2026 06:22:43 +0000 (0:00:01.161) 1:09:20.613 ********** 2026-04-05 06:22:54.969389 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969401 | orchestrator | 2026-04-05 06:22:54.969414 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-05 06:22:54.969426 | orchestrator | Sunday 05 April 2026 06:22:45 +0000 (0:00:01.180) 1:09:21.794 ********** 2026-04-05 06:22:54.969439 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969452 | orchestrator | 2026-04-05 06:22:54.969465 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-05 06:22:54.969477 | orchestrator | Sunday 05 April 2026 06:22:46 +0000 (0:00:01.263) 1:09:23.058 ********** 2026-04-05 06:22:54.969489 | orchestrator | 
skipping: [testbed-node-5] 2026-04-05 06:22:54.969503 | orchestrator | 2026-04-05 06:22:54.969523 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-05 06:22:54.969541 | orchestrator | Sunday 05 April 2026 06:22:47 +0000 (0:00:01.167) 1:09:24.225 ********** 2026-04-05 06:22:54.969560 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969578 | orchestrator | 2026-04-05 06:22:54.969598 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-05 06:22:54.969617 | orchestrator | Sunday 05 April 2026 06:22:48 +0000 (0:00:01.312) 1:09:25.537 ********** 2026-04-05 06:22:54.969634 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969646 | orchestrator | 2026-04-05 06:22:54.969656 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-05 06:22:54.969667 | orchestrator | Sunday 05 April 2026 06:22:50 +0000 (0:00:01.222) 1:09:26.759 ********** 2026-04-05 06:22:54.969677 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969688 | orchestrator | 2026-04-05 06:22:54.969698 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-05 06:22:54.969709 | orchestrator | Sunday 05 April 2026 06:22:51 +0000 (0:00:01.164) 1:09:27.924 ********** 2026-04-05 06:22:54.969719 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969730 | orchestrator | 2026-04-05 06:22:54.969741 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-05 06:22:54.969751 | orchestrator | Sunday 05 April 2026 06:22:52 +0000 (0:00:01.261) 1:09:29.185 ********** 2026-04-05 06:22:54.969762 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:54.969772 | orchestrator | 2026-04-05 06:22:54.969783 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-05 06:22:54.969794 
| orchestrator | Sunday 05 April 2026 06:22:53 +0000 (0:00:01.142) 1:09:30.328 ********** 2026-04-05 06:22:54.969805 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:22:54.969815 | orchestrator | 2026-04-05 06:22:54.969826 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-05 06:22:54.969836 | orchestrator | Sunday 05 April 2026 06:22:54 +0000 (0:00:01.228) 1:09:31.556 ********** 2026-04-05 06:22:54.969848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:54.969861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}})  2026-04-05 06:22:54.969881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-05 06:22:54.969903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}})  2026-04-05 06:22:55.094057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-05 06:22:55.094243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}})  2026-04-05 06:22:55.094373 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}})  2026-04-05 06:22:55.094383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-05 06:22:55.094417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-05 06:22:55.094455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-05 06:22:56.531705 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:22:56.531822 | orchestrator | 2026-04-05 06:22:56.531842 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-05 06:22:56.531855 | orchestrator | Sunday 05 April 2026 06:22:56 +0000 (0:00:01.447) 1:09:33.003 ********** 2026-04-05 06:22:56.531870 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.531887 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3', 'dm-uuid-LVM-9UezPogn4iBZe6P7qQrqwzha86NqStuHDCD4nKQoWHbWxoZMb1fslfN29VoZpcJs'], 'uuids': ['6a14875d-bd0b-4c06-a83b-3b78425422b8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.531900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d', 'scsi-SQEMU_QEMU_HARDDISK_19b95bad-a78c-4860-8023-fde2f6985c3d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '19b95bad', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.531950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-AEfaZd-XwNv-jeZy-Y4Dl-URGR-AcQK-h04IOd', 'scsi-0QEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564', 'scsi-SQEMU_QEMU_HARDDISK_ff9c3d73-f1cc-45bb-b790-5886e9656564'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.531985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.531998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.532010 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-05-01-47-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.532023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.532043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3', 'dm-uuid-CRYPT-LUKS2-c32619169d0d4a8291361c2d88108b6f-JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.532059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:22:56.532078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ee367cf6--46c0--523d--847e--ea936940168f-osd--block--ee367cf6--46c0--523d--847e--ea936940168f', 'dm-uuid-LVM-TS1OCZvjwyyyxpFhTIhlzCxOf0BiA8DYJnQXnuIqpoQKKss6VUeJbrqrfgmXBpX3'], 'uuids': ['c3261916-9d0d-4a82-9136-1c2d88108b6f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ff9c3d73', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JnQXnu-Iqpo-QKKs-s6VU-eJbr-qrfg-mXBpX3']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:23:09.172301 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-4zjw41-ixri-tYlP-A49F-paSc-aVGf-RmixeH', 'scsi-0QEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4', 'scsi-SQEMU_QEMU_HARDDISK_0b219c4b-918e-4afd-b52e-bcd8400111e4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b219c4b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d286f04f--da20--50d3--800d--bbe3052cfbc3-osd--block--d286f04f--da20--50d3--800d--bbe3052cfbc3']}}, 'ansible_loop_var': 'item'})  2026-04-05 06:23:09.172427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-05 06:23:09.172491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '20d4ddc2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1', 'scsi-SQEMU_QEMU_HARDDISK_20d4ddc2-780d-4d78-9e94-d8812351d131-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:23:09.172528 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:23:09.172541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:23:09.172554 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs', 'dm-uuid-CRYPT-LUKS2-6a14875dbd0b4c06a83b3b78425422b8-DCD4nK-QoWH-bWxo-ZMb1-fslf-N29V-oZpcJs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-05 06:23:09.172577 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:09.172589 | orchestrator |
2026-04-05 06:23:09.172601 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-05 06:23:09.172613 | orchestrator | Sunday 05 April 2026 06:22:57 +0000 (0:00:01.426) 1:09:34.430 **********
2026-04-05 06:23:09.172624 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:09.172636 | orchestrator |
2026-04-05 06:23:09.172648 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-05 06:23:09.172659 | orchestrator | Sunday 05 April 2026 06:22:59 +0000 (0:00:01.577) 1:09:36.007 **********
2026-04-05 06:23:09.172670 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:09.172680 | orchestrator |
2026-04-05 06:23:09.172691 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:23:09.172710 | orchestrator | Sunday 05 April 2026 06:23:00 +0000 (0:00:01.158) 1:09:37.166 **********
2026-04-05 06:23:09.172729 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:09.172757 | orchestrator |
2026-04-05 06:23:09.172783 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:23:09.172803 | orchestrator | Sunday 05 April 2026 06:23:01 +0000 (0:00:01.505) 1:09:38.671 **********
2026-04-05 06:23:09.172825 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:09.172846 | orchestrator |
2026-04-05 06:23:09.172868 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-05 06:23:09.172889 | orchestrator | Sunday 05 April 2026 06:23:03 +0000 (0:00:01.123) 1:09:39.795 **********
2026-04-05 06:23:09.172907 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:09.172920 | orchestrator |
2026-04-05 06:23:09.172940 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-05 06:23:09.172954 | orchestrator | Sunday 05 April 2026 06:23:04 +0000 (0:00:01.334) 1:09:41.130 **********
2026-04-05 06:23:09.172966 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:09.172979 | orchestrator |
2026-04-05 06:23:09.172991 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-05 06:23:09.173005 | orchestrator | Sunday 05 April 2026 06:23:05 +0000 (0:00:01.477) 1:09:42.607 **********
2026-04-05 06:23:09.173018 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 06:23:09.173032 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 06:23:09.173045 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 06:23:09.173057 | orchestrator |
2026-04-05 06:23:09.173089 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-05 06:23:09.173113 | orchestrator | Sunday 05 April 2026 06:23:07 +0000 (0:00:01.850) 1:09:44.457 **********
2026-04-05 06:23:09.173126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-05 06:23:09.173137 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-05 06:23:09.173148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-05 06:23:09.173159 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:09.173170 | orchestrator |
2026-04-05 06:23:09.173180 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-05 06:23:09.173191 | orchestrator | Sunday 05 April 2026 06:23:08 +0000 (0:00:01.165) 1:09:45.623 **********
2026-04-05 06:23:09.173202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-05 06:23:09.173225 | orchestrator |
2026-04-05 06:23:09.173246 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-05 06:23:52.951725 | orchestrator | Sunday 05 April 2026 06:23:10 +0000 (0:00:01.176) 1:09:46.800 **********
2026-04-05 06:23:52.951838 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.951855 | orchestrator |
2026-04-05 06:23:52.951868 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-05 06:23:52.951879 | orchestrator | Sunday 05 April 2026 06:23:11 +0000 (0:00:01.140) 1:09:47.941 **********
2026-04-05 06:23:52.951891 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.951902 | orchestrator |
2026-04-05 06:23:52.951913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-05 06:23:52.951924 | orchestrator | Sunday 05 April 2026 06:23:12 +0000 (0:00:01.196) 1:09:49.137 **********
2026-04-05 06:23:52.951934 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.951945 | orchestrator |
2026-04-05 06:23:52.951956 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-05 06:23:52.951967 | orchestrator | Sunday 05 April 2026 06:23:13 +0000 (0:00:01.164) 1:09:50.302 **********
2026-04-05 06:23:52.951977 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.951989 | orchestrator |
2026-04-05 06:23:52.952000 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-05 06:23:52.952011 | orchestrator | Sunday 05 April 2026 06:23:14 +0000 (0:00:01.293) 1:09:51.595 **********
2026-04-05 06:23:52.952022 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:23:52.952033 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:23:52.952044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:23:52.952054 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.952065 | orchestrator |
2026-04-05 06:23:52.952076 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-05 06:23:52.952087 | orchestrator | Sunday 05 April 2026 06:23:16 +0000 (0:00:01.521) 1:09:53.116 **********
2026-04-05 06:23:52.952097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:23:52.952108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:23:52.952119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:23:52.952129 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.952140 | orchestrator |
2026-04-05 06:23:52.952151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-05 06:23:52.952161 | orchestrator | Sunday 05 April 2026 06:23:17 +0000 (0:00:01.458) 1:09:54.575 **********
2026-04-05 06:23:52.952172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-05 06:23:52.952183 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-05 06:23:52.952218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:23:52.952229 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.952241 | orchestrator |
2026-04-05 06:23:52.952252 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-05 06:23:52.952264 | orchestrator | Sunday 05 April 2026 06:23:19 +0000 (0:00:01.448) 1:09:56.024 **********
2026-04-05 06:23:52.952276 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.952290 | orchestrator |
2026-04-05 06:23:52.952303 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-05 06:23:52.952317 | orchestrator | Sunday 05 April 2026 06:23:20 +0000 (0:00:01.247) 1:09:57.271 **********
2026-04-05 06:23:52.952330 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-05 06:23:52.952342 | orchestrator |
2026-04-05 06:23:52.952355 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-05 06:23:52.952368 | orchestrator | Sunday 05 April 2026 06:23:22 +0000 (0:00:01.936) 1:09:59.207 **********
2026-04-05 06:23:52.952381 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:23:52.952419 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:23:52.952433 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:23:52.952459 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:23:52.952473 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:23:52.952485 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:23:52.952498 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:23:52.952510 | orchestrator |
2026-04-05 06:23:52.952523 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-05 06:23:52.952535 | orchestrator | Sunday 05 April 2026 06:23:24 +0000 (0:00:02.448) 1:10:01.656 **********
2026-04-05 06:23:52.952548 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-05 06:23:52.952561 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-05 06:23:52.952573 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-05 06:23:52.952586 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-05 06:23:52.952599 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-05 06:23:52.952611 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-05 06:23:52.952623 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-05 06:23:52.952634 | orchestrator |
2026-04-05 06:23:52.952644 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-05 06:23:52.952655 | orchestrator | Sunday 05 April 2026 06:23:27 +0000 (0:00:02.434) 1:10:04.090 **********
2026-04-05 06:23:52.952666 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:23:52.952677 | orchestrator |
2026-04-05 06:23:52.952704 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-05 06:23:52.952716 | orchestrator | Sunday 05 April 2026 06:23:29 +0000 (0:00:01.957) 1:10:06.047 **********
2026-04-05 06:23:52.952727 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 06:23:52.952739 | orchestrator |
2026-04-05 06:23:52.952750 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-05 06:23:52.952760 | orchestrator | Sunday 05 April 2026 06:23:31 +0000 (0:00:02.490) 1:10:08.538 **********
2026-04-05 06:23:52.952771 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-05 06:23:52.952782 | orchestrator |
2026-04-05 06:23:52.952793 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 06:23:52.952803 | orchestrator | Sunday 05 April 2026 06:23:33 +0000 (0:00:01.962) 1:10:10.500 **********
2026-04-05 06:23:52.952814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-04-05 06:23:52.952825 | orchestrator |
2026-04-05 06:23:52.952836 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 06:23:52.952846 | orchestrator | Sunday 05 April 2026 06:23:34 +0000 (0:00:01.163) 1:10:11.664 **********
2026-04-05 06:23:52.952857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-04-05 06:23:52.952868 | orchestrator |
2026-04-05 06:23:52.952878 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 06:23:52.952889 | orchestrator | Sunday 05 April 2026 06:23:36 +0000 (0:00:01.209) 1:10:12.873 **********
2026-04-05 06:23:52.952900 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.952910 | orchestrator |
2026-04-05 06:23:52.952921 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 06:23:52.952941 | orchestrator | Sunday 05 April 2026 06:23:37 +0000 (0:00:01.188) 1:10:14.062 **********
2026-04-05 06:23:52.952952 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.952962 | orchestrator |
2026-04-05 06:23:52.952973 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 06:23:52.952984 | orchestrator | Sunday 05 April 2026 06:23:38 +0000 (0:00:01.511) 1:10:15.573 **********
2026-04-05 06:23:52.952994 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953005 | orchestrator |
2026-04-05 06:23:52.953016 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 06:23:52.953026 | orchestrator | Sunday 05 April 2026 06:23:40 +0000 (0:00:01.572) 1:10:17.145 **********
2026-04-05 06:23:52.953037 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953048 | orchestrator |
2026-04-05 06:23:52.953059 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 06:23:52.953069 | orchestrator | Sunday 05 April 2026 06:23:42 +0000 (0:00:01.751) 1:10:18.897 **********
2026-04-05 06:23:52.953080 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953090 | orchestrator |
2026-04-05 06:23:52.953101 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 06:23:52.953112 | orchestrator | Sunday 05 April 2026 06:23:43 +0000 (0:00:01.124) 1:10:20.021 **********
2026-04-05 06:23:52.953122 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953133 | orchestrator |
2026-04-05 06:23:52.953144 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 06:23:52.953154 | orchestrator | Sunday 05 April 2026 06:23:44 +0000 (0:00:01.214) 1:10:21.236 **********
2026-04-05 06:23:52.953165 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953175 | orchestrator |
2026-04-05 06:23:52.953204 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 06:23:52.953215 | orchestrator | Sunday 05 April 2026 06:23:45 +0000 (0:00:01.172) 1:10:22.409 **********
2026-04-05 06:23:52.953226 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953237 | orchestrator |
2026-04-05 06:23:52.953247 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 06:23:52.953258 | orchestrator | Sunday 05 April 2026 06:23:47 +0000 (0:00:01.544) 1:10:23.954 **********
2026-04-05 06:23:52.953274 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953285 | orchestrator |
2026-04-05 06:23:52.953296 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 06:23:52.953307 | orchestrator | Sunday 05 April 2026 06:23:48 +0000 (0:00:01.546) 1:10:25.500 **********
2026-04-05 06:23:52.953318 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953328 | orchestrator |
2026-04-05 06:23:52.953339 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 06:23:52.953350 | orchestrator | Sunday 05 April 2026 06:23:49 +0000 (0:00:00.756) 1:10:26.257 **********
2026-04-05 06:23:52.953360 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953371 | orchestrator |
2026-04-05 06:23:52.953382 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 06:23:52.953393 | orchestrator | Sunday 05 April 2026 06:23:50 +0000 (0:00:00.810) 1:10:27.067 **********
2026-04-05 06:23:52.953404 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953414 | orchestrator |
2026-04-05 06:23:52.953425 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 06:23:52.953436 | orchestrator | Sunday 05 April 2026 06:23:51 +0000 (0:00:00.835) 1:10:27.903 **********
2026-04-05 06:23:52.953446 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953457 | orchestrator |
2026-04-05 06:23:52.953468 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 06:23:52.953479 | orchestrator | Sunday 05 April 2026 06:23:51 +0000 (0:00:00.803) 1:10:28.707 **********
2026-04-05 06:23:52.953490 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:23:52.953500 | orchestrator |
2026-04-05 06:23:52.953511 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 06:23:52.953532 | orchestrator | Sunday 05 April 2026 06:23:52 +0000 (0:00:00.810) 1:10:29.517 **********
2026-04-05 06:23:52.953543 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:23:52.953554 | orchestrator |
2026-04-05 06:23:52.953570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 06:24:35.173812 | orchestrator | Sunday 05 April 2026 06:23:53 +0000 (0:00:00.786) 1:10:30.304 **********
2026-04-05 06:24:35.173927 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.173943 | orchestrator |
2026-04-05 06:24:35.173956 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 06:24:35.173967 | orchestrator | Sunday 05 April 2026 06:23:54 +0000 (0:00:00.774) 1:10:31.079 **********
2026-04-05 06:24:35.173979 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.173990 | orchestrator |
2026-04-05 06:24:35.174001 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 06:24:35.174012 | orchestrator | Sunday 05 April 2026 06:23:55 +0000 (0:00:01.044) 1:10:32.123 **********
2026-04-05 06:24:35.174084 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.174097 | orchestrator |
2026-04-05 06:24:35.174108 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 06:24:35.174119 | orchestrator | Sunday 05 April 2026 06:23:56 +0000 (0:00:00.863) 1:10:32.987 **********
2026-04-05 06:24:35.174186 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.174200 | orchestrator |
2026-04-05 06:24:35.174211 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-05 06:24:35.174227 | orchestrator | Sunday 05 April 2026 06:23:57 +0000 (0:00:00.799) 1:10:33.786 **********
2026-04-05 06:24:35.174246 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174264 | orchestrator |
2026-04-05 06:24:35.174282 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-05 06:24:35.174302 | orchestrator | Sunday 05 April 2026 06:23:57 +0000 (0:00:00.792) 1:10:34.579 **********
2026-04-05 06:24:35.174322 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174342 | orchestrator |
2026-04-05 06:24:35.174362 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-05 06:24:35.174381 | orchestrator | Sunday 05 April 2026 06:23:58 +0000 (0:00:00.837) 1:10:35.416 **********
2026-04-05 06:24:35.174398 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174411 | orchestrator |
2026-04-05 06:24:35.174425 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-05 06:24:35.174437 | orchestrator | Sunday 05 April 2026 06:23:59 +0000 (0:00:00.808) 1:10:36.225 **********
2026-04-05 06:24:35.174449 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174462 | orchestrator |
2026-04-05 06:24:35.174474 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-05 06:24:35.174487 | orchestrator | Sunday 05 April 2026 06:24:00 +0000 (0:00:00.802) 1:10:37.027 **********
2026-04-05 06:24:35.174499 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174511 | orchestrator |
2026-04-05 06:24:35.174524 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-05 06:24:35.174537 | orchestrator | Sunday 05 April 2026 06:24:01 +0000 (0:00:00.777) 1:10:37.804 **********
2026-04-05 06:24:35.174550 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174562 | orchestrator |
2026-04-05 06:24:35.174574 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-05 06:24:35.174588 | orchestrator | Sunday 05 April 2026 06:24:01 +0000 (0:00:00.771) 1:10:38.576 **********
2026-04-05 06:24:35.174603 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174625 | orchestrator |
2026-04-05 06:24:35.174655 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-05 06:24:35.174672 | orchestrator | Sunday 05 April 2026 06:24:02 +0000 (0:00:00.784) 1:10:39.361 **********
2026-04-05 06:24:35.174689 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174707 | orchestrator |
2026-04-05 06:24:35.174724 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-05 06:24:35.174775 | orchestrator | Sunday 05 April 2026 06:24:03 +0000 (0:00:00.833) 1:10:40.194 **********
2026-04-05 06:24:35.174793 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174809 | orchestrator |
2026-04-05 06:24:35.174826 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-05 06:24:35.174843 | orchestrator | Sunday 05 April 2026 06:24:04 +0000 (0:00:00.845) 1:10:41.040 **********
2026-04-05 06:24:35.174860 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174878 | orchestrator |
2026-04-05 06:24:35.174915 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-05 06:24:35.174934 | orchestrator | Sunday 05 April 2026 06:24:05 +0000 (0:00:01.037) 1:10:42.077 **********
2026-04-05 06:24:35.174949 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.174961 | orchestrator |
2026-04-05 06:24:35.174972 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-05 06:24:35.174982 | orchestrator | Sunday 05 April 2026 06:24:06 +0000 (0:00:00.855) 1:10:42.933 **********
2026-04-05 06:24:35.174993 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175004 | orchestrator |
2026-04-05 06:24:35.175014 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-05 06:24:35.175025 | orchestrator | Sunday 05 April 2026 06:24:06 +0000 (0:00:00.782) 1:10:43.715 **********
2026-04-05 06:24:35.175035 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.175046 | orchestrator |
2026-04-05 06:24:35.175057 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-05 06:24:35.175067 | orchestrator | Sunday 05 April 2026 06:24:08 +0000 (0:00:01.565) 1:10:45.280 **********
2026-04-05 06:24:35.175078 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.175089 | orchestrator |
2026-04-05 06:24:35.175099 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-05 06:24:35.175110 | orchestrator | Sunday 05 April 2026 06:24:10 +0000 (0:00:01.941) 1:10:47.222 **********
2026-04-05 06:24:35.175121 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-05 06:24:35.175161 | orchestrator |
2026-04-05 06:24:35.175173 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-05 06:24:35.175184 | orchestrator | Sunday 05 April 2026 06:24:11 +0000 (0:00:01.243) 1:10:48.466 **********
2026-04-05 06:24:35.175195 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175206 | orchestrator |
2026-04-05 06:24:35.175216 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-05 06:24:35.175247 | orchestrator | Sunday 05 April 2026 06:24:12 +0000 (0:00:01.148) 1:10:49.615 **********
2026-04-05 06:24:35.175259 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175270 | orchestrator |
2026-04-05 06:24:35.175281 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-05 06:24:35.175300 | orchestrator | Sunday 05 April 2026 06:24:14 +0000 (0:00:01.148) 1:10:50.764 **********
2026-04-05 06:24:35.175318 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-05 06:24:35.175336 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-05 06:24:35.175355 | orchestrator |
2026-04-05 06:24:35.175375 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-05 06:24:35.175393 | orchestrator | Sunday 05 April 2026 06:24:15 +0000 (0:00:01.839) 1:10:52.603 **********
2026-04-05 06:24:35.175411 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.175422 | orchestrator |
2026-04-05 06:24:35.175433 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-05 06:24:35.175444 | orchestrator | Sunday 05 April 2026 06:24:17 +0000 (0:00:01.453) 1:10:54.057 **********
2026-04-05 06:24:35.175454 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175465 | orchestrator |
2026-04-05 06:24:35.175475 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-05 06:24:35.175486 | orchestrator | Sunday 05 April 2026 06:24:18 +0000 (0:00:01.144) 1:10:55.201 **********
2026-04-05 06:24:35.175508 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175519 | orchestrator |
2026-04-05 06:24:35.175529 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-05 06:24:35.175540 | orchestrator | Sunday 05 April 2026 06:24:19 +0000 (0:00:00.803) 1:10:56.005 **********
2026-04-05 06:24:35.175550 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175562 | orchestrator |
2026-04-05 06:24:35.175580 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-05 06:24:35.175609 | orchestrator | Sunday 05 April 2026 06:24:20 +0000 (0:00:00.913) 1:10:56.919 **********
2026-04-05 06:24:35.175628 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-05 06:24:35.175645 | orchestrator |
2026-04-05 06:24:35.175664 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-05 06:24:35.175681 | orchestrator | Sunday 05 April 2026 06:24:21 +0000 (0:00:01.144) 1:10:58.063 **********
2026-04-05 06:24:35.175699 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.175717 | orchestrator |
2026-04-05 06:24:35.175735 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-05 06:24:35.175754 | orchestrator | Sunday 05 April 2026 06:24:23 +0000 (0:00:01.731) 1:10:59.795 **********
2026-04-05 06:24:35.175773 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-05 06:24:35.175791 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-05 06:24:35.175808 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-05 06:24:35.175819 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175830 | orchestrator |
2026-04-05 06:24:35.175840 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-05 06:24:35.175851 | orchestrator | Sunday 05 April 2026 06:24:24 +0000 (0:00:01.155) 1:11:00.950 **********
2026-04-05 06:24:35.175862 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175872 | orchestrator |
2026-04-05 06:24:35.175883 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-05 06:24:35.175894 | orchestrator | Sunday 05 April 2026 06:24:25 +0000 (0:00:01.214) 1:11:02.165 **********
2026-04-05 06:24:35.175904 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175915 | orchestrator |
2026-04-05 06:24:35.175925 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-05 06:24:35.175936 | orchestrator | Sunday 05 April 2026 06:24:26 +0000 (0:00:01.204) 1:11:03.370 **********
2026-04-05 06:24:35.175947 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.175957 | orchestrator |
2026-04-05 06:24:35.175976 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-05 06:24:35.175987 | orchestrator | Sunday 05 April 2026 06:24:27 +0000 (0:00:01.133) 1:11:04.503 **********
2026-04-05 06:24:35.175997 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.176008 | orchestrator |
2026-04-05 06:24:35.176018 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-05 06:24:35.176029 | orchestrator | Sunday 05 April 2026 06:24:28 +0000 (0:00:01.126) 1:11:05.629 **********
2026-04-05 06:24:35.176040 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.176050 | orchestrator |
2026-04-05 06:24:35.176061 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-05 06:24:35.176072 | orchestrator | Sunday 05 April 2026 06:24:29 +0000 (0:00:00.830) 1:11:06.460 **********
2026-04-05 06:24:35.176082 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.176093 | orchestrator |
2026-04-05 06:24:35.176104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-05 06:24:35.176115 | orchestrator | Sunday 05 April 2026 06:24:31 +0000 (0:00:02.072) 1:11:08.532 **********
2026-04-05 06:24:35.176125 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:24:35.176190 | orchestrator |
2026-04-05 06:24:35.176201 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-05 06:24:35.176222 | orchestrator | Sunday 05 April 2026 06:24:32 +0000 (0:00:00.783) 1:11:09.315 **********
2026-04-05 06:24:35.176233 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-05 06:24:35.176244 | orchestrator |
2026-04-05 06:24:35.176255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-05 06:24:35.176265 | orchestrator | Sunday 05 April 2026 06:24:34 +0000 (0:00:01.414) 1:11:10.729 **********
2026-04-05 06:24:35.176276 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:24:35.176287 | orchestrator |
2026-04-05 06:24:35.176297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-05 06:24:35.176319 | orchestrator | Sunday 05 April 2026 06:24:35 +0000 (0:00:01.151) 1:11:11.881 **********
2026-04-05 06:25:17.222784 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.222929 | orchestrator |
2026-04-05 06:25:17.222957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-05 06:25:17.222980 | orchestrator | Sunday 05 April 2026 06:24:36 +0000 (0:00:01.124) 1:11:13.006 **********
2026-04-05 06:25:17.223000 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223021 | orchestrator |
2026-04-05 06:25:17.223041 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-05 06:25:17.223061 | orchestrator | Sunday 05 April 2026 06:24:37 +0000 (0:00:01.174) 1:11:14.181 **********
2026-04-05 06:25:17.223129 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223148 | orchestrator |
2026-04-05 06:25:17.223166 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-05 06:25:17.223183 | orchestrator | Sunday 05 April 2026 06:24:38 +0000 (0:00:01.165) 1:11:15.346 **********
2026-04-05 06:25:17.223199 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223216 | orchestrator |
2026-04-05 06:25:17.223233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-05 06:25:17.223250 | orchestrator | Sunday 05 April 2026 06:24:39 +0000 (0:00:01.135) 1:11:16.482 **********
2026-04-05 06:25:17.223268 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223288 | orchestrator |
2026-04-05 06:25:17.223309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-05 06:25:17.223331 | orchestrator | Sunday 05 April 2026 06:24:40 +0000 (0:00:01.192) 1:11:17.674 **********
2026-04-05 06:25:17.223355 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223377 | orchestrator |
2026-04-05 06:25:17.223398 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-05 06:25:17.223419 | orchestrator | Sunday 05 April 2026 06:24:42 +0000 (0:00:01.166) 1:11:18.841 **********
2026-04-05 06:25:17.223440 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:25:17.223460 | orchestrator |
2026-04-05 06:25:17.223480 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-05 06:25:17.223501 | orchestrator | Sunday 05 April 2026 06:24:43 +0000 (0:00:01.185) 1:11:20.027 **********
2026-04-05 06:25:17.223524 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:25:17.223545 | orchestrator |
2026-04-05 06:25:17.223566 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-05 06:25:17.223585 | orchestrator | Sunday 05 April 2026 06:24:44 +0000 (0:00:00.835) 1:11:20.862 **********
2026-04-05 06:25:17.223605 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-05 06:25:17.223627 | orchestrator |
2026-04-05 06:25:17.223645 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-05 06:25:17.223664 | orchestrator | Sunday 05 April 2026 06:24:45 +0000 (0:00:01.144) 1:11:22.007 **********
2026-04-05 06:25:17.223683 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-05 06:25:17.223704 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-05 06:25:17.223725 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-05 06:25:17.223746 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-05 06:25:17.223802 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-05 06:25:17.223824 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-05 06:25:17.223845 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-05 06:25:17.223866 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-05 06:25:17.223887 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-05 06:25:17.223906 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-05 06:25:17.223927 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-05 06:25:17.223949 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-05 06:25:17.223969 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-05 06:25:17.224006 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-05 06:25:17.224027 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-05 06:25:17.224048 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-05 06:25:17.224068 | orchestrator |
2026-04-05 06:25:17.224119 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-05 06:25:17.224139 | orchestrator | Sunday 05 April 2026 06:24:51 +0000 (0:00:06.353) 1:11:28.361 ********** 2026-04-05 06:25:17.224235 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-05 06:25:17.224255 | orchestrator | 2026-04-05 06:25:17.224273 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-05 06:25:17.224291 | orchestrator | Sunday 05 April 2026 06:24:52 +0000 (0:00:01.175) 1:11:29.536 ********** 2026-04-05 06:25:17.224311 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:25:17.224330 | orchestrator | 2026-04-05 06:25:17.224348 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-05 06:25:17.224367 | orchestrator | Sunday 05 April 2026 06:24:54 +0000 (0:00:01.533) 1:11:31.070 ********** 2026-04-05 06:25:17.224387 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:25:17.224406 | orchestrator | 2026-04-05 06:25:17.224426 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-05 06:25:17.224444 | orchestrator | Sunday 05 April 2026 06:24:55 +0000 (0:00:01.613) 1:11:32.683 ********** 2026-04-05 06:25:17.224464 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224484 | orchestrator | 2026-04-05 06:25:17.224505 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-05 06:25:17.224553 | orchestrator | Sunday 05 April 2026 06:24:56 +0000 (0:00:00.832) 1:11:33.516 ********** 2026-04-05 06:25:17.224574 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224592 | 
orchestrator | 2026-04-05 06:25:17.224610 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-05 06:25:17.224628 | orchestrator | Sunday 05 April 2026 06:24:57 +0000 (0:00:00.817) 1:11:34.333 ********** 2026-04-05 06:25:17.224645 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224748 | orchestrator | 2026-04-05 06:25:17.224769 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-05 06:25:17.224788 | orchestrator | Sunday 05 April 2026 06:24:58 +0000 (0:00:00.838) 1:11:35.171 ********** 2026-04-05 06:25:17.224807 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224826 | orchestrator | 2026-04-05 06:25:17.224845 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-05 06:25:17.224864 | orchestrator | Sunday 05 April 2026 06:24:59 +0000 (0:00:00.757) 1:11:35.929 ********** 2026-04-05 06:25:17.224881 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224899 | orchestrator | 2026-04-05 06:25:17.224916 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-05 06:25:17.224955 | orchestrator | Sunday 05 April 2026 06:24:59 +0000 (0:00:00.787) 1:11:36.718 ********** 2026-04-05 06:25:17.224974 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.224992 | orchestrator | 2026-04-05 06:25:17.225011 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-05 06:25:17.225030 | orchestrator | Sunday 05 April 2026 06:25:00 +0000 (0:00:00.820) 1:11:37.538 ********** 2026-04-05 06:25:17.225050 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225565 | orchestrator | 2026-04-05 06:25:17.225582 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-05 06:25:17.225594 | orchestrator | Sunday 05 April 2026 06:25:01 +0000 (0:00:00.808) 1:11:38.347 ********** 2026-04-05 06:25:17.225605 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225615 | orchestrator | 2026-04-05 06:25:17.225626 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-05 06:25:17.225637 | orchestrator | Sunday 05 April 2026 06:25:02 +0000 (0:00:00.863) 1:11:39.210 ********** 2026-04-05 06:25:17.225647 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225658 | orchestrator | 2026-04-05 06:25:17.225668 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-05 06:25:17.225678 | orchestrator | Sunday 05 April 2026 06:25:03 +0000 (0:00:00.991) 1:11:40.202 ********** 2026-04-05 06:25:17.225689 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225699 | orchestrator | 2026-04-05 06:25:17.225710 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-05 06:25:17.225720 | orchestrator | Sunday 05 April 2026 06:25:04 +0000 (0:00:00.794) 1:11:40.997 ********** 2026-04-05 06:25:17.225731 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225741 | orchestrator | 2026-04-05 06:25:17.225752 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-05 06:25:17.225763 | orchestrator | Sunday 05 April 2026 06:25:05 +0000 (0:00:00.879) 1:11:41.876 ********** 2026-04-05 06:25:17.225773 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-05 06:25:17.225784 | orchestrator | 2026-04-05 06:25:17.225794 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-05 06:25:17.225805 | orchestrator | Sunday 05 April 2026 06:25:09 +0000 (0:00:04.086) 1:11:45.963 ********** 2026-04-05 06:25:17.225816 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:25:17.225827 | orchestrator | 2026-04-05 06:25:17.225838 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-05 06:25:17.225848 | orchestrator | Sunday 05 April 2026 06:25:10 +0000 (0:00:00.809) 1:11:46.772 ********** 2026-04-05 06:25:17.225874 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-05 06:25:17.225888 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-05 06:25:17.225899 | orchestrator | 2026-04-05 06:25:17.225909 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-05 06:25:17.225918 | orchestrator | Sunday 05 April 2026 06:25:14 +0000 (0:00:04.613) 1:11:51.385 ********** 2026-04-05 06:25:17.225928 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225937 | orchestrator | 2026-04-05 06:25:17.225946 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-05 06:25:17.225956 | orchestrator | Sunday 05 April 2026 06:25:15 +0000 (0:00:00.860) 1:11:52.246 ********** 2026-04-05 06:25:17.225977 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.225986 | orchestrator | 2026-04-05 06:25:17.225996 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-05 06:25:17.226005 | orchestrator | Sunday 05 April 2026 06:25:16 +0000 (0:00:00.829) 1:11:53.075 ********** 2026-04-05 06:25:17.226099 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:25:17.226122 | orchestrator | 2026-04-05 06:25:17.226136 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-05 06:25:17.226163 | orchestrator | Sunday 05 April 2026 06:25:17 +0000 (0:00:00.855) 1:11:53.931 ********** 2026-04-05 06:26:23.899980 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.900132 | orchestrator | 2026-04-05 06:26:23.900150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-05 06:26:23.900163 | orchestrator | Sunday 05 April 2026 06:25:18 +0000 (0:00:00.792) 1:11:54.724 ********** 2026-04-05 06:26:23.900173 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.900183 | orchestrator | 2026-04-05 06:26:23.900193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-05 06:26:23.900203 | orchestrator | Sunday 05 April 2026 06:25:18 +0000 (0:00:00.846) 1:11:55.571 ********** 2026-04-05 06:26:23.900213 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:26:23.900224 | orchestrator | 2026-04-05 06:26:23.900233 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-05 06:26:23.900243 | orchestrator | Sunday 05 April 2026 06:25:19 +0000 (0:00:01.037) 1:11:56.609 ********** 2026-04-05 06:26:23.900253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:26:23.900264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:26:23.900273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:26:23.900283 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 06:26:23.900292 | orchestrator | 2026-04-05 06:26:23.900303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-05 06:26:23.900313 | orchestrator | Sunday 05 April 2026 06:25:21 +0000 (0:00:01.687) 1:11:58.297 ********** 2026-04-05 06:26:23.900323 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:26:23.900332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:26:23.900342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:26:23.900352 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.900361 | orchestrator | 2026-04-05 06:26:23.900371 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-05 06:26:23.900381 | orchestrator | Sunday 05 April 2026 06:25:23 +0000 (0:00:01.728) 1:12:00.025 ********** 2026-04-05 06:26:23.900390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-05 06:26:23.900400 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-05 06:26:23.900410 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-05 06:26:23.900419 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.900429 | orchestrator | 2026-04-05 06:26:23.900439 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-05 06:26:23.900449 | orchestrator | Sunday 05 April 2026 06:25:24 +0000 (0:00:01.140) 1:12:01.166 ********** 2026-04-05 06:26:23.900459 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:26:23.900468 | orchestrator | 2026-04-05 06:26:23.900478 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-05 06:26:23.900487 | orchestrator | Sunday 05 April 2026 06:25:25 +0000 (0:00:00.824) 1:12:01.991 ********** 2026-04-05 06:26:23.900499 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-04-05 06:26:23.900510 | orchestrator | 2026-04-05 06:26:23.900522 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-05 06:26:23.900533 | orchestrator | Sunday 05 April 2026 06:25:26 +0000 (0:00:01.011) 1:12:03.002 ********** 2026-04-05 06:26:23.900568 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:26:23.900579 | orchestrator | 2026-04-05 06:26:23.900590 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-05 06:26:23.900601 | orchestrator | Sunday 05 April 2026 06:25:27 +0000 (0:00:01.418) 1:12:04.422 ********** 2026-04-05 06:26:23.900612 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-04-05 06:26:23.900624 | orchestrator | 2026-04-05 06:26:23.900636 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:26:23.900647 | orchestrator | Sunday 05 April 2026 06:25:28 +0000 (0:00:01.145) 1:12:05.567 ********** 2026-04-05 06:26:23.900675 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:26:23.900693 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 06:26:23.900710 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:26:23.900726 | orchestrator | 2026-04-05 06:26:23.900742 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:26:23.900758 | orchestrator | Sunday 05 April 2026 06:25:32 +0000 (0:00:03.306) 1:12:08.874 ********** 2026-04-05 06:26:23.900775 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:26:23.900791 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-05 06:26:23.900807 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:26:23.900823 | orchestrator | 2026-04-05 06:26:23.900841 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-05 06:26:23.900858 | orchestrator | Sunday 05 April 2026 06:25:34 +0000 (0:00:01.947) 1:12:10.822 ********** 2026-04-05 06:26:23.900875 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.900891 | orchestrator | 2026-04-05 06:26:23.900907 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-05 06:26:23.900923 | orchestrator | Sunday 05 April 2026 06:25:34 +0000 (0:00:00.845) 1:12:11.667 ********** 2026-04-05 06:26:23.900939 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-04-05 06:26:23.900957 | orchestrator | 2026-04-05 06:26:23.900973 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-05 06:26:23.900989 | orchestrator | Sunday 05 April 2026 06:25:36 +0000 (0:00:01.113) 1:12:12.781 ********** 2026-04-05 06:26:23.901032 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:26:23.901051 | orchestrator | 2026-04-05 06:26:23.901067 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-05 06:26:23.901082 | orchestrator | Sunday 05 April 2026 06:25:38 +0000 (0:00:02.165) 1:12:14.946 ********** 2026-04-05 06:26:23.901123 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:26:23.901140 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-05 06:26:23.901156 | orchestrator | 2026-04-05 06:26:23.901166 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-05 06:26:23.901175 | orchestrator | Sunday 05 April 2026 06:25:43 +0000 (0:00:05.107) 1:12:20.054 ********** 
2026-04-05 06:26:23.901185 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-05 06:26:23.901194 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-05 06:26:23.901203 | orchestrator | 2026-04-05 06:26:23.901213 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-05 06:26:23.901227 | orchestrator | Sunday 05 April 2026 06:25:46 +0000 (0:00:03.129) 1:12:23.184 ********** 2026-04-05 06:26:23.901243 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:26:23.901257 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:26:23.901273 | orchestrator | 2026-04-05 06:26:23.901288 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-05 06:26:23.901322 | orchestrator | Sunday 05 April 2026 06:25:48 +0000 (0:00:01.660) 1:12:24.844 ********** 2026-04-05 06:26:23.901338 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-04-05 06:26:23.901354 | orchestrator | 2026-04-05 06:26:23.901370 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-05 06:26:23.901388 | orchestrator | Sunday 05 April 2026 06:25:49 +0000 (0:00:01.182) 1:12:26.026 ********** 2026-04-05 06:26:23.901403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901458 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.901468 | orchestrator | 2026-04-05 06:26:23.901477 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-05 06:26:23.901487 | orchestrator | Sunday 05 April 2026 06:25:50 +0000 (0:00:01.613) 1:12:27.640 ********** 2026-04-05 06:26:23.901497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-05 06:26:23.901563 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.901579 | orchestrator | 2026-04-05 06:26:23.901595 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-05 06:26:23.901610 | orchestrator | Sunday 05 April 2026 06:25:52 +0000 (0:00:01.621) 1:12:29.262 ********** 2026-04-05 06:26:23.901625 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:26:23.901642 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:26:23.901657 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:26:23.901672 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:26:23.901690 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-05 06:26:23.901706 | orchestrator | 2026-04-05 06:26:23.901724 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-05 06:26:23.901741 | orchestrator | Sunday 05 April 2026 06:26:23 +0000 (0:00:30.549) 1:12:59.812 ********** 2026-04-05 06:26:23.901758 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:26:23.901774 | orchestrator | 2026-04-05 06:26:23.901804 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-05 06:26:23.901833 | orchestrator | Sunday 05 April 2026 06:26:23 +0000 (0:00:00.796) 1:13:00.609 ********** 2026-04-05 06:27:17.260182 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:27:17.260296 | orchestrator | 2026-04-05 06:27:17.260313 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-05 06:27:17.260327 | orchestrator | Sunday 05 April 2026 06:26:24 +0000 (0:00:00.773) 1:13:01.382 ********** 2026-04-05 06:27:17.260337 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-04-05 06:27:17.260349 | orchestrator | 2026-04-05 06:27:17.260360 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-05 06:27:17.260371 | orchestrator | Sunday 05 April 2026 06:26:25 +0000 (0:00:01.161) 1:13:02.544 ********** 2026-04-05 06:27:17.260382 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-04-05 06:27:17.260393 | orchestrator | 2026-04-05 06:27:17.260403 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-05 06:27:17.260414 | orchestrator | Sunday 05 April 2026 06:26:27 +0000 (0:00:01.288) 1:13:03.833 ********** 2026-04-05 06:27:17.260425 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:27:17.260436 | orchestrator | 2026-04-05 06:27:17.260447 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-05 06:27:17.260458 | orchestrator | Sunday 05 April 2026 06:26:29 +0000 (0:00:02.053) 1:13:05.887 ********** 2026-04-05 06:27:17.260469 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:27:17.260480 | orchestrator | 2026-04-05 06:27:17.260491 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-05 06:27:17.260502 | orchestrator | Sunday 05 April 2026 06:26:31 +0000 (0:00:01.933) 1:13:07.821 ********** 2026-04-05 06:27:17.260512 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:27:17.260523 | orchestrator | 2026-04-05 06:27:17.260534 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-05 06:27:17.260545 | orchestrator | Sunday 05 April 2026 06:26:33 +0000 (0:00:02.288) 1:13:10.110 ********** 2026-04-05 06:27:17.260556 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-05 06:27:17.260568 | orchestrator | 2026-04-05 06:27:17.260579 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-04-05 06:27:17.260590 | 
orchestrator | skipping: no hosts matched 2026-04-05 06:27:17.260601 | orchestrator | 2026-04-05 06:27:17.260612 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-04-05 06:27:17.260627 | orchestrator | skipping: no hosts matched 2026-04-05 06:27:17.260638 | orchestrator | 2026-04-05 06:27:17.260649 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-04-05 06:27:17.260660 | orchestrator | skipping: no hosts matched 2026-04-05 06:27:17.260671 | orchestrator | 2026-04-05 06:27:17.260681 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-04-05 06:27:17.260692 | orchestrator | 2026-04-05 06:27:17.260703 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-04-05 06:27:17.260714 | orchestrator | Sunday 05 April 2026 06:26:37 +0000 (0:00:04.254) 1:13:14.364 ********** 2026-04-05 06:27:17.260728 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:27:17.260741 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:27:17.260754 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:27:17.260767 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:27:17.260779 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:27:17.260791 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:27:17.260803 | orchestrator | 2026-04-05 06:27:17.260816 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-04-05 06:27:17.260829 | orchestrator | Sunday 05 April 2026 06:26:40 +0000 (0:00:02.562) 1:13:16.927 ********** 2026-04-05 06:27:17.260842 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:27:17.260880 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:27:17.260894 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:27:17.260907 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:27:17.260919 | 
orchestrator | changed: [testbed-node-2]
2026-04-05 06:27:17.260930 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:27:17.261004 | orchestrator |
2026-04-05 06:27:17.261031 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:27:17.261042 | orchestrator | Sunday 05 April 2026 06:26:44 +0000 (0:00:03.983) 1:13:20.910 **********
2026-04-05 06:27:17.261053 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.261064 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.261074 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.261085 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.261096 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.261106 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.261118 | orchestrator |
2026-04-05 06:27:17.261129 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:27:17.261140 | orchestrator | Sunday 05 April 2026 06:26:46 +0000 (0:00:02.549) 1:13:23.460 **********
2026-04-05 06:27:17.261150 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.261161 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.261172 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.261183 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.261193 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.261204 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.261215 | orchestrator |
2026-04-05 06:27:17.261226 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-05 06:27:17.261236 | orchestrator | Sunday 05 April 2026 06:26:49 +0000 (0:00:02.278) 1:13:25.738 **********
2026-04-05 06:27:17.261248 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 06:27:17.261260 | orchestrator |
2026-04-05 06:27:17.261271 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-05 06:27:17.261282 | orchestrator | Sunday 05 April 2026 06:26:51 +0000 (0:00:02.363) 1:13:28.102 **********
2026-04-05 06:27:17.261293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 06:27:17.261304 | orchestrator |
2026-04-05 06:27:17.261331 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-05 06:27:17.261343 | orchestrator | Sunday 05 April 2026 06:26:53 +0000 (0:00:02.298) 1:13:30.401 **********
2026-04-05 06:27:17.261354 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.261365 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.261376 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.261386 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.261397 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.261408 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.261418 | orchestrator |
2026-04-05 06:27:17.261429 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-05 06:27:17.261440 | orchestrator | Sunday 05 April 2026 06:26:55 +0000 (0:00:02.061) 1:13:32.462 **********
2026-04-05 06:27:17.261450 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.261461 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.261472 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.261482 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.261493 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.261504 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.261514 | orchestrator |
2026-04-05 06:27:17.261525 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-05 06:27:17.261536 | orchestrator | Sunday 05 April 2026 06:26:58 +0000 (0:00:02.294) 1:13:34.757 **********
2026-04-05 06:27:17.261547 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.261557 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.261578 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.261589 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.261600 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.261610 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.261621 | orchestrator |
2026-04-05 06:27:17.261632 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-05 06:27:17.261643 | orchestrator | Sunday 05 April 2026 06:27:00 +0000 (0:00:02.279) 1:13:37.037 **********
2026-04-05 06:27:17.261653 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.261664 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.261675 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.261685 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.261696 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.261707 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.261717 | orchestrator |
2026-04-05 06:27:17.261728 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-05 06:27:17.261739 | orchestrator | Sunday 05 April 2026 06:27:02 +0000 (0:00:02.140) 1:13:39.177 **********
2026-04-05 06:27:17.261750 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.261760 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.261771 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.261782 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.261792 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.261803 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.261814 | orchestrator |
2026-04-05 06:27:17.261825 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-05 06:27:17.261835 | orchestrator | Sunday 05 April 2026 06:27:04 +0000 (0:00:02.477) 1:13:41.655 **********
2026-04-05 06:27:17.261846 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.261857 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.261867 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.261878 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.261889 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.261899 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.261910 | orchestrator |
2026-04-05 06:27:17.261921 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-05 06:27:17.261931 | orchestrator | Sunday 05 April 2026 06:27:06 +0000 (0:00:01.808) 1:13:43.463 **********
2026-04-05 06:27:17.261976 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.261987 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.261997 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.262008 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.262082 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.262096 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.262106 | orchestrator |
2026-04-05 06:27:17.262123 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-05 06:27:17.262135 | orchestrator | Sunday 05 April 2026 06:27:08 +0000 (0:00:02.106) 1:13:45.570 **********
2026-04-05 06:27:17.262145 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.262160 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.262177 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.262194 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.262213 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.262231 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.262249 | orchestrator |
2026-04-05 06:27:17.262267 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-05 06:27:17.262278 | orchestrator | Sunday 05 April 2026 06:27:11 +0000 (0:00:02.320) 1:13:47.890 **********
2026-04-05 06:27:17.262289 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.262299 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.262310 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.262320 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:27:17.262331 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:27:17.262342 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:27:17.262361 | orchestrator |
2026-04-05 06:27:17.262372 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-05 06:27:17.262383 | orchestrator | Sunday 05 April 2026 06:27:13 +0000 (0:00:02.680) 1:13:50.571 **********
2026-04-05 06:27:17.262394 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:27:17.262405 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:27:17.262415 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:27:17.262426 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.262436 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.262447 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.262457 | orchestrator |
2026-04-05 06:27:17.262468 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-05 06:27:17.262478 | orchestrator | Sunday 05 April 2026 06:27:16 +0000 (0:00:02.273) 1:13:52.845 **********
2026-04-05 06:27:17.262489 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:27:17.262500 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:27:17.262510 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:27:17.262521 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:27:17.262532 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:27:17.262542 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:27:17.262553 | orchestrator |
2026-04-05 06:27:17.262572 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-05 06:28:14.963943 | orchestrator | Sunday 05 April 2026 06:27:18 +0000 (0:00:02.137) 1:13:54.983 **********
2026-04-05 06:28:14.964060 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.964077 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.964090 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.964101 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.964113 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.964124 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.964135 | orchestrator |
2026-04-05 06:28:14.964147 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-05 06:28:14.964159 | orchestrator | Sunday 05 April 2026 06:27:20 +0000 (0:00:01.814) 1:13:56.797 **********
2026-04-05 06:28:14.964170 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.964181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.964192 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.964203 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.964214 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.964225 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.964235 | orchestrator |
2026-04-05 06:28:14.964247 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-05 06:28:14.964259 | orchestrator | Sunday 05 April 2026 06:27:21 +0000 (0:00:01.887) 1:13:58.685 **********
2026-04-05 06:28:14.964270 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.964281 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.964292 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.964303 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.964314 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.964325 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.964336 | orchestrator |
2026-04-05 06:28:14.964347 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-05 06:28:14.964359 | orchestrator | Sunday 05 April 2026 06:27:24 +0000 (0:00:02.250) 1:14:00.935 **********
2026-04-05 06:28:14.964370 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.964381 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.964391 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.964402 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.964413 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.964424 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.964436 | orchestrator |
2026-04-05 06:28:14.964448 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-05 06:28:14.964461 | orchestrator | Sunday 05 April 2026 06:27:26 +0000 (0:00:01.823) 1:14:02.759 **********
2026-04-05 06:28:14.964503 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.964516 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.964528 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.964541 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.964555 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.964567 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.964579 | orchestrator |
2026-04-05 06:28:14.964592 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-05 06:28:14.964604 | orchestrator | Sunday 05 April 2026 06:27:28 +0000 (0:00:02.148) 1:14:04.908 **********
2026-04-05 06:28:14.964617 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.964630 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.964641 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.964651 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.964663 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.964673 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.964684 | orchestrator |
2026-04-05 06:28:14.964695 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-05 06:28:14.964705 | orchestrator | Sunday 05 April 2026 06:27:29 +0000 (0:00:01.788) 1:14:06.696 **********
2026-04-05 06:28:14.964716 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.964727 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.964738 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.964748 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.964759 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.964769 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.964780 | orchestrator |
2026-04-05 06:28:14.964805 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-05 06:28:14.964816 | orchestrator | Sunday 05 April 2026 06:27:32 +0000 (0:00:02.185) 1:14:08.882 **********
2026-04-05 06:28:14.964827 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.964837 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.964848 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.964858 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.964869 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.964907 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.964918 | orchestrator |
2026-04-05 06:28:14.964929 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-05 06:28:14.964940 | orchestrator | Sunday 05 April 2026 06:27:34 +0000 (0:00:02.292) 1:14:11.175 **********
2026-04-05 06:28:14.964951 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.964961 | orchestrator |
2026-04-05 06:28:14.964972 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-05 06:28:14.964983 | orchestrator | Sunday 05 April 2026 06:27:37 +0000 (0:00:03.161) 1:14:14.337 **********
2026-04-05 06:28:14.964993 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965004 | orchestrator |
2026-04-05 06:28:14.965014 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-05 06:28:14.965025 | orchestrator | Sunday 05 April 2026 06:27:40 +0000 (0:00:03.046) 1:14:17.383 **********
2026-04-05 06:28:14.965036 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965046 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.965057 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.965067 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.965078 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.965088 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.965099 | orchestrator |
2026-04-05 06:28:14.965110 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-05 06:28:14.965120 | orchestrator | Sunday 05 April 2026 06:27:43 +0000 (0:00:02.627) 1:14:20.011 **********
2026-04-05 06:28:14.965131 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965141 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.965152 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.965162 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.965173 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.965183 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.965204 | orchestrator |
2026-04-05 06:28:14.965215 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-05 06:28:14.965243 | orchestrator | Sunday 05 April 2026 06:27:45 +0000 (0:00:02.612) 1:14:22.623 **********
2026-04-05 06:28:14.965256 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 06:28:14.965268 | orchestrator |
2026-04-05 06:28:14.965278 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-05 06:28:14.965289 | orchestrator | Sunday 05 April 2026 06:27:48 +0000 (0:00:02.686) 1:14:25.310 **********
2026-04-05 06:28:14.965300 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965311 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.965321 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.965332 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:28:14.965342 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:28:14.965353 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:28:14.965363 | orchestrator |
2026-04-05 06:28:14.965374 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-05 06:28:14.965385 | orchestrator | Sunday 05 April 2026 06:27:51 +0000 (0:00:02.740) 1:14:28.050 **********
2026-04-05 06:28:14.965396 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:28:14.965406 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:28:14.965417 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:28:14.965427 | orchestrator | changed: [testbed-node-1]
2026-04-05 06:28:14.965438 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:28:14.965448 | orchestrator | changed: [testbed-node-2]
2026-04-05 06:28:14.965459 | orchestrator |
2026-04-05 06:28:14.965470 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-04-05 06:28:14.965480 | orchestrator |
2026-04-05 06:28:14.965491 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:28:14.965651 | orchestrator | Sunday 05 April 2026 06:27:56 +0000 (0:00:04.670) 1:14:32.721 **********
2026-04-05 06:28:14.965669 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965680 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.965690 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.965701 | orchestrator |
2026-04-05 06:28:14.965712 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:28:14.965722 | orchestrator | Sunday 05 April 2026 06:27:57 +0000 (0:00:01.697) 1:14:34.419 **********
2026-04-05 06:28:14.965733 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965744 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:28:14.965754 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:28:14.965765 | orchestrator |
2026-04-05 06:28:14.965775 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-05 06:28:14.965787 | orchestrator | Sunday 05 April 2026 06:27:59 +0000 (0:00:01.759) 1:14:36.178 **********
2026-04-05 06:28:14.965798 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:28:14.965808 | orchestrator |
2026-04-05 06:28:14.965819 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-05 06:28:14.965830 | orchestrator | Sunday 05 April 2026 06:28:01 +0000 (0:00:02.337) 1:14:38.516 **********
2026-04-05 06:28:14.965841 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.965852 | orchestrator |
2026-04-05 06:28:14.965862 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-04-05 06:28:14.965962 | orchestrator |
2026-04-05 06:28:14.965983 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-04-05 06:28:14.965994 | orchestrator | Sunday 05 April 2026 06:28:04 +0000 (0:00:02.393) 1:14:40.909 **********
2026-04-05 06:28:14.966005 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.966075 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.966087 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.966097 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.966108 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.966130 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.966140 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:28:14.966151 | orchestrator |
2026-04-05 06:28:14.966162 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 06:28:14.966181 | orchestrator | Sunday 05 April 2026 06:28:06 +0000 (0:00:02.059) 1:14:42.969 **********
2026-04-05 06:28:14.966192 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.966202 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.966213 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.966223 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.966234 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.966244 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.966255 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:28:14.966266 | orchestrator |
2026-04-05 06:28:14.966276 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-05 06:28:14.966287 | orchestrator | Sunday 05 April 2026 06:28:08 +0000 (0:00:02.466) 1:14:45.436 **********
2026-04-05 06:28:14.966298 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.966308 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.966319 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.966329 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.966340 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.966350 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.966361 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:28:14.966372 | orchestrator |
2026-04-05 06:28:14.966382 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-05 06:28:14.966393 | orchestrator | Sunday 05 April 2026 06:28:11 +0000 (0:00:02.663) 1:14:48.099 **********
2026-04-05 06:28:14.966404 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.966414 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.966424 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.966435 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:28:14.966445 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:28:14.966456 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:28:14.966466 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:28:14.966477 | orchestrator |
2026-04-05 06:28:14.966487 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-04-05 06:28:14.966498 | orchestrator | Sunday 05 April 2026 06:28:13 +0000 (0:00:02.550) 1:14:50.649 **********
2026-04-05 06:28:14.966508 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:28:14.966519 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:28:14.966529 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:28:14.966550 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:29:03.449384 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:29:03.449499 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:29:03.449515 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449528 | orchestrator |
2026-04-05 06:29:03.449540 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-04-05 06:29:03.449552 | orchestrator |
2026-04-05 06:29:03.449563 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-04-05 06:29:03.449574 | orchestrator | Sunday 05 April 2026 06:28:17 +0000 (0:00:03.803) 1:14:54.453 **********
2026-04-05 06:29:03.449586 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-04-05 06:29:03.449598 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-04-05 06:29:03.449608 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-04-05 06:29:03.449619 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449630 | orchestrator |
2026-04-05 06:29:03.449641 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-05 06:29:03.449652 | orchestrator | Sunday 05 April 2026 06:28:18 +0000 (0:00:01.121) 1:14:55.574 **********
2026-04-05 06:29:03.449663 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449674 | orchestrator |
2026-04-05 06:29:03.449706 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-05 06:29:03.449718 | orchestrator | Sunday 05 April 2026 06:28:20 +0000 (0:00:01.183) 1:14:56.758 **********
2026-04-05 06:29:03.449729 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449740 | orchestrator |
2026-04-05 06:29:03.449750 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-05 06:29:03.449761 | orchestrator | Sunday 05 April 2026 06:28:21 +0000 (0:00:01.170) 1:14:57.929 **********
2026-04-05 06:29:03.449772 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449783 | orchestrator |
2026-04-05 06:29:03.449794 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-05 06:29:03.449804 | orchestrator | Sunday 05 April 2026 06:28:22 +0000 (0:00:01.129) 1:14:59.059 **********
2026-04-05 06:29:03.449815 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449858 | orchestrator |
2026-04-05 06:29:03.449870 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-04-05 06:29:03.449881 | orchestrator | Sunday 05 April 2026 06:28:23 +0000 (0:00:01.125) 1:15:00.184 **********
2026-04-05 06:29:03.449892 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-04-05 06:29:03.449903 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-04-05 06:29:03.449914 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449924 | orchestrator |
2026-04-05 06:29:03.449935 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-04-05 06:29:03.449946 | orchestrator | Sunday 05 April 2026 06:28:24 +0000 (0:00:01.153) 1:15:01.338 **********
2026-04-05 06:29:03.449956 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.449967 | orchestrator |
2026-04-05 06:29:03.449978 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-04-05 06:29:03.449988 | orchestrator | Sunday 05 April 2026 06:28:25 +0000 (0:00:01.328) 1:15:02.666 **********
2026-04-05 06:29:03.449999 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450010 | orchestrator |
2026-04-05 06:29:03.450077 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-04-05 06:29:03.450089 | orchestrator | Sunday 05 April 2026 06:28:27 +0000 (0:00:01.112) 1:15:03.779 **********
2026-04-05 06:29:03.450100 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450111 | orchestrator |
2026-04-05 06:29:03.450121 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-04-05 06:29:03.450132 | orchestrator | Sunday 05 April 2026 06:28:28 +0000 (0:00:01.127) 1:15:04.906 **********
2026-04-05 06:29:03.450143 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-04-05 06:29:03.450168 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-04-05 06:29:03.450179 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450190 | orchestrator |
2026-04-05 06:29:03.450200 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-04-05 06:29:03.450211 | orchestrator | Sunday 05 April 2026 06:28:29 +0000 (0:00:01.150) 1:15:06.057 **********
2026-04-05 06:29:03.450222 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450232 | orchestrator |
2026-04-05 06:29:03.450243 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-04-05 06:29:03.450254 | orchestrator | Sunday 05 April 2026 06:28:30 +0000 (0:00:01.231) 1:15:07.289 **********
2026-04-05 06:29:03.450265 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450275 | orchestrator |
2026-04-05 06:29:03.450286 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-04-05 06:29:03.450297 | orchestrator | Sunday 05 April 2026 06:28:31 +0000 (0:00:01.152) 1:15:08.441 **********
2026-04-05 06:29:03.450308 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450318 | orchestrator |
2026-04-05 06:29:03.450329 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-04-05 06:29:03.450340 | orchestrator | Sunday 05 April 2026 06:28:32 +0000 (0:00:01.117) 1:15:09.559 **********
2026-04-05 06:29:03.450360 | orchestrator | skipping: [testbed-manager]
2026-04-05 06:29:03.450371 | orchestrator |
2026-04-05 06:29:03.450382 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-04-05 06:29:03.450392 | orchestrator |
2026-04-05 06:29:03.450403 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-05 06:29:03.450414 | orchestrator | Sunday 05 April 2026 06:28:34 +0000 (0:00:02.126) 1:15:11.686 **********
2026-04-05 06:29:03.450425 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450436 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450446 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450457 | orchestrator |
2026-04-05 06:29:03.450468 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-05 06:29:03.450478 | orchestrator | Sunday 05 April 2026 06:28:36 +0000 (0:00:01.452) 1:15:13.138 **********
2026-04-05 06:29:03.450489 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450500 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450530 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450542 | orchestrator |
2026-04-05 06:29:03.450553 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-05 06:29:03.450564 | orchestrator | Sunday 05 April 2026 06:28:37 +0000 (0:00:01.375) 1:15:14.514 **********
2026-04-05 06:29:03.450575 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450585 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450596 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450607 | orchestrator |
2026-04-05 06:29:03.450617 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-05 06:29:03.450628 | orchestrator | Sunday 05 April 2026 06:28:39 +0000 (0:00:01.872) 1:15:16.387 **********
2026-04-05 06:29:03.450639 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450650 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450661 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450671 | orchestrator |
2026-04-05 06:29:03.450682 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-05 06:29:03.450693 | orchestrator | Sunday 05 April 2026 06:28:41 +0000 (0:00:01.507) 1:15:17.894 **********
2026-04-05 06:29:03.450703 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450714 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450725 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450735 | orchestrator |
2026-04-05 06:29:03.450746 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-04-05 06:29:03.450756 | orchestrator | Sunday 05 April 2026 06:28:42 +0000 (0:00:01.539) 1:15:19.434 **********
2026-04-05 06:29:03.450767 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450778 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:29:03.450788 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:29:03.450799 | orchestrator |
2026-04-05 06:29:03.450810 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-04-05 06:29:03.450842 | orchestrator | Sunday 05 April 2026 06:28:44 +0000 (0:00:01.544) 1:15:20.979 **********
2026-04-05 06:29:03.450855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:29:03.450866 | orchestrator |
2026-04-05 06:29:03.450876 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-04-05 06:29:03.450887 | orchestrator |
2026-04-05 06:29:03.450898 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-05 06:29:03.450909 | orchestrator | Sunday 05 April 2026 06:28:45 +0000 (0:00:01.664) 1:15:22.643 **********
2026-04-05 06:29:03.450920 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.450931 | orchestrator |
2026-04-05 06:29:03.450941 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-05 06:29:03.450952 | orchestrator | Sunday 05 April 2026 06:28:47 +0000 (0:00:01.489) 1:15:24.133 **********
2026-04-05 06:29:03.450963 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.450974 | orchestrator |
2026-04-05 06:29:03.450984 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-04-05 06:29:03.451026 | orchestrator | Sunday 05 April 2026 06:28:48 +0000 (0:00:01.141) 1:15:25.274 **********
2026-04-05 06:29:03.451071 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451091 | orchestrator |
2026-04-05 06:29:03.451108 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-04-05 06:29:03.451126 | orchestrator | Sunday 05 April 2026 06:28:49 +0000 (0:00:01.173) 1:15:26.447 **********
2026-04-05 06:29:03.451144 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451161 | orchestrator |
2026-04-05 06:29:03.451179 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-04-05 06:29:03.451197 | orchestrator | Sunday 05 April 2026 06:28:52 +0000 (0:00:02.956) 1:15:29.404 **********
2026-04-05 06:29:03.451216 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451234 | orchestrator |
2026-04-05 06:29:03.451252 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-04-05 06:29:03.451271 | orchestrator | Sunday 05 April 2026 06:28:56 +0000 (0:00:03.591) 1:15:32.996 **********
2026-04-05 06:29:03.451299 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:29:03.451318 | orchestrator |
2026-04-05 06:29:03.451331 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-04-05 06:29:03.451341 | orchestrator |
2026-04-05 06:29:03.451352 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-04-05 06:29:03.451363 | orchestrator | Sunday 05 April 2026 06:28:58 +0000 (0:00:02.129) 1:15:35.125 **********
2026-04-05 06:29:03.451374 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451384 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:29:03.451395 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:29:03.451406 | orchestrator |
2026-04-05 06:29:03.451417 | orchestrator | TASK [Show ceph status] ********************************************************
2026-04-05 06:29:03.451428 | orchestrator | Sunday 05 April 2026 06:28:59 +0000 (0:00:01.588) 1:15:36.714 **********
2026-04-05 06:29:03.451439 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451450 | orchestrator |
2026-04-05 06:29:03.451461 | orchestrator | TASK [Show all daemons version] ************************************************
2026-04-05 06:29:03.451471 | orchestrator | Sunday 05 April 2026 06:29:02 +0000 (0:00:02.284) 1:15:38.999 **********
2026-04-05 06:29:03.451482 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:29:03.451493 | orchestrator |
2026-04-05 06:29:03.451504 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 06:29:03.451516 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-04-05 06:29:03.451528 | orchestrator | testbed-manager : ok=25 changed=1 unreachable=0 failed=0 skipped=76 rescued=0 ignored=0
2026-04-05 06:29:03.451541 | orchestrator | testbed-node-0 : ok=248 changed=19 unreachable=0 failed=0 skipped=369 rescued=0 ignored=0
2026-04-05 06:29:03.451552 | orchestrator | testbed-node-1 : ok=191 changed=14 unreachable=0 failed=0 skipped=343 rescued=0 ignored=0
2026-04-05 06:29:03.451574 | orchestrator | testbed-node-2 : ok=196 changed=14 unreachable=0 failed=0 skipped=344 rescued=0 ignored=0
2026-04-05 06:29:06.895237 | orchestrator | testbed-node-3 : ok=311 changed=21 unreachable=0 failed=0 skipped=341 rescued=0 ignored=0
2026-04-05 06:29:06.895336 | orchestrator | testbed-node-4 : ok=308 changed=16 unreachable=0 failed=0 skipped=352 rescued=0 ignored=0
2026-04-05 06:29:06.895352 | orchestrator | testbed-node-5 : ok=308 changed=17 unreachable=0 failed=0 skipped=351 rescued=0 ignored=0
2026-04-05 06:29:06.895364 |
orchestrator | 2026-04-05 06:29:06.895376 | orchestrator | 2026-04-05 06:29:06.895387 | orchestrator | 2026-04-05 06:29:06.895426 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 06:29:06.895438 | orchestrator | Sunday 05 April 2026 06:29:05 +0000 (0:00:03.701) 1:15:42.700 ********** 2026-04-05 06:29:06.895450 | orchestrator | =============================================================================== 2026-04-05 06:29:06.895460 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.93s 2026-04-05 06:29:06.895471 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 72.77s 2026-04-05 06:29:06.895482 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.21s 2026-04-05 06:29:06.895493 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.20s 2026-04-05 06:29:06.895503 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.65s 2026-04-05 06:29:06.895514 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.55s 2026-04-05 06:29:06.895525 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.20s 2026-04-05 06:29:06.895535 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 27.11s 2026-04-05 06:29:06.895546 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.08s 2026-04-05 06:29:06.895557 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.97s 2026-04-05 06:29:06.895568 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.25s 2026-04-05 06:29:06.895578 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.63s 2026-04-05 06:29:06.895589 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.49s 2026-04-05 06:29:06.895600 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.19s 2026-04-05 06:29:06.895610 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.21s 2026-04-05 06:29:06.895621 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.77s 2026-04-05 06:29:06.895632 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.62s 2026-04-05 06:29:06.895643 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.16s 2026-04-05 06:29:06.895653 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.27s 2026-04-05 06:29:06.895664 | orchestrator | Stop ceph mon ---------------------------------------------------------- 10.25s 2026-04-05 06:29:07.125711 | orchestrator | + osism apply cephclient 2026-04-05 06:29:08.506640 | orchestrator | 2026-04-05 06:29:08 | INFO  | Prepare task for execution of cephclient. 2026-04-05 06:29:08.578181 | orchestrator | 2026-04-05 06:29:08 | INFO  | Task eee77b89-80f5-4da6-9700-b7e131970266 (cephclient) was prepared for execution. 2026-04-05 06:29:08.578274 | orchestrator | 2026-04-05 06:29:08 | INFO  | It takes a moment until task eee77b89-80f5-4da6-9700-b7e131970266 (cephclient) has been started and output is visible here. 
2026-04-05 06:29:38.366396 | orchestrator | 2026-04-05 06:29:38.366538 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-05 06:29:38.366566 | orchestrator | 2026-04-05 06:29:38.366588 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-05 06:29:38.366606 | orchestrator | Sunday 05 April 2026 06:29:16 +0000 (0:00:03.472) 0:00:03.473 ********** 2026-04-05 06:29:38.366628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-05 06:29:38.366649 | orchestrator | 2026-04-05 06:29:38.366668 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-05 06:29:38.366689 | orchestrator | Sunday 05 April 2026 06:29:17 +0000 (0:00:01.915) 0:00:05.388 ********** 2026-04-05 06:29:38.366709 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-05 06:29:38.366730 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-05 06:29:38.366837 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-05 06:29:38.366864 | orchestrator | 2026-04-05 06:29:38.366882 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-05 06:29:38.366903 | orchestrator | Sunday 05 April 2026 06:29:20 +0000 (0:00:02.640) 0:00:08.028 ********** 2026-04-05 06:29:38.366925 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-05 06:29:38.366944 | orchestrator | 2026-04-05 06:29:38.366964 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-05 06:29:38.366982 | orchestrator | Sunday 05 April 2026 06:29:22 +0000 (0:00:02.077) 0:00:10.106 ********** 2026-04-05 06:29:38.367002 | orchestrator | ok: 
[testbed-manager] 2026-04-05 06:29:38.367021 | orchestrator | 2026-04-05 06:29:38.367041 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-05 06:29:38.367061 | orchestrator | Sunday 05 April 2026 06:29:24 +0000 (0:00:01.944) 0:00:12.050 ********** 2026-04-05 06:29:38.367079 | orchestrator | ok: [testbed-manager] 2026-04-05 06:29:38.367097 | orchestrator | 2026-04-05 06:29:38.367116 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-05 06:29:38.367135 | orchestrator | Sunday 05 April 2026 06:29:26 +0000 (0:00:01.906) 0:00:13.957 ********** 2026-04-05 06:29:38.367153 | orchestrator | ok: [testbed-manager] 2026-04-05 06:29:38.367172 | orchestrator | 2026-04-05 06:29:38.367193 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-05 06:29:38.367212 | orchestrator | Sunday 05 April 2026 06:29:28 +0000 (0:00:02.265) 0:00:16.222 ********** 2026-04-05 06:29:38.367231 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-05 06:29:38.367250 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-04-05 06:29:38.367269 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-05 06:29:38.367288 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-05 06:29:38.367307 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-05 06:29:38.367326 | orchestrator | 2026-04-05 06:29:38.367346 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-05 06:29:38.367364 | orchestrator | Sunday 05 April 2026 06:29:33 +0000 (0:00:05.012) 0:00:21.235 ********** 2026-04-05 06:29:38.367385 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-05 06:29:38.367406 | orchestrator | 2026-04-05 06:29:38.367426 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-05 06:29:38.367445 
| orchestrator | Sunday 05 April 2026 06:29:35 +0000 (0:00:01.527) 0:00:22.763 ********** 2026-04-05 06:29:38.367464 | orchestrator | skipping: [testbed-manager] 2026-04-05 06:29:38.367482 | orchestrator | 2026-04-05 06:29:38.367502 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-05 06:29:38.367522 | orchestrator | Sunday 05 April 2026 06:29:36 +0000 (0:00:01.122) 0:00:23.885 ********** 2026-04-05 06:29:38.367542 | orchestrator | skipping: [testbed-manager] 2026-04-05 06:29:38.367562 | orchestrator | 2026-04-05 06:29:38.367582 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 06:29:38.367603 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 06:29:38.367623 | orchestrator | 2026-04-05 06:29:38.367642 | orchestrator | 2026-04-05 06:29:38.367662 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 06:29:38.367680 | orchestrator | Sunday 05 April 2026 06:29:37 +0000 (0:00:01.554) 0:00:25.440 ********** 2026-04-05 06:29:38.367700 | orchestrator | =============================================================================== 2026-04-05 06:29:38.367720 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.01s 2026-04-05 06:29:38.367739 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.64s 2026-04-05 06:29:38.367757 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.27s 2026-04-05 06:29:38.367774 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.08s 2026-04-05 06:29:38.367848 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.94s 2026-04-05 06:29:38.367868 | orchestrator | osism.services.cephclient : Include container tasks 
--------------------- 1.92s 2026-04-05 06:29:38.367885 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.91s 2026-04-05 06:29:38.367902 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.55s 2026-04-05 06:29:38.367940 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.53s 2026-04-05 06:29:38.367960 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.12s 2026-04-05 06:29:38.582111 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-05 06:29:38.582210 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-04-05 06:29:38.589045 | orchestrator | + set -e 2026-04-05 06:29:38.589108 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 06:29:38.589123 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 06:29:38.589134 | orchestrator | ++ INTERACTIVE=false 2026-04-05 06:29:38.589145 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 06:29:38.589156 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 06:29:38.589166 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 06:29:38.589177 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 06:29:38.589188 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 06:29:38.589198 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 06:29:38.589209 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 06:29:38.589220 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 06:29:38.589230 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 06:29:38.589241 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 06:29:38.589252 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 06:29:38.589263 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 06:29:38.589273 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 06:29:38.589284 | orchestrator | ++ export ARA=false 2026-04-05 
06:29:38.589295 | orchestrator | ++ ARA=false 2026-04-05 06:29:38.589306 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 06:29:38.589316 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 06:29:38.589328 | orchestrator | ++ export TEMPEST=false 2026-04-05 06:29:38.589339 | orchestrator | ++ TEMPEST=false 2026-04-05 06:29:38.589350 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 06:29:38.589360 | orchestrator | ++ IS_ZUUL=true 2026-04-05 06:29:38.589371 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 06:29:38.589382 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 06:29:38.589393 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 06:29:38.589403 | orchestrator | ++ EXTERNAL_API=false 2026-04-05 06:29:38.589414 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 06:29:38.589424 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 06:29:38.589435 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 06:29:38.589445 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 06:29:38.589456 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 06:29:38.589467 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 06:29:38.589477 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-05 06:29:38.589488 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-05 06:29:38.589498 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 06:29:38.590140 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 06:29:38.596624 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 06:29:38.596676 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 06:29:38.596691 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 06:29:38.596702 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-04-05 06:29:47.829647 | orchestrator | 2026-04-05 06:29:47 | ERROR  | Unable to get ansible vault password 
2026-04-05 06:29:47.829757 | orchestrator | 2026-04-05 06:29:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 06:29:47.829820 | orchestrator | 2026-04-05 06:29:47 | ERROR  | Dropping encrypted entries 2026-04-05 06:29:47.865663 | orchestrator | 2026-04-05 06:29:47 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-05 06:29:47.866409 | orchestrator | 2026-04-05 06:29:47 | INFO  | Kolla configuration check passed 2026-04-05 06:29:48.083281 | orchestrator | 2026-04-05 06:29:48 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-04-05 06:29:48.099843 | orchestrator | 2026-04-05 06:29:48 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-04-05 06:29:48.376113 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-05 06:29:54.860573 | orchestrator | 2026-04-05 06:29:54 | ERROR  | Unable to get ansible vault password 2026-04-05 06:29:54.860693 | orchestrator | 2026-04-05 06:29:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 06:29:54.860711 | orchestrator | 2026-04-05 06:29:54 | ERROR  | Dropping encrypted entries 2026-04-05 06:29:54.899140 | orchestrator | 2026-04-05 06:29:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-05 06:29:55.039367 | orchestrator | 2026-04-05 06:29:55 | INFO  | Found 207 classic queue(s) in vhost '/': 2026-04-05 06:29:55.039658 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-04-05 06:29:55.039680 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-04-05 06:29:55.039692 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-04-05 06:29:55.039704 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-04-05 06:29:55.039729 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican.workers_fanout_18b3fae6b5b5421e97ec3da19073b569 (vhost: /, messages: 0) 2026-04-05 06:29:55.039743 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican.workers_fanout_4ddc118d8202414da266bb205a703361 (vhost: /, messages: 0) 2026-04-05 06:29:55.042284 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican.workers_fanout_d74481b6d2214dab8155a19eedc7479c (vhost: /, messages: 0) 2026-04-05 06:29:55.042314 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-04-05 06:29:55.042326 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central (vhost: /, messages: 0) 2026-04-05 06:29:55.042422 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.042438 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.042449 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.042460 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_383abf8f2f994cce8dbe19fe95774187 (vhost: /, messages: 0) 2026-04-05 06:29:55.042471 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_618192fe4c984e8f9d5f8f5453fdf454 (vhost: /, messages: 0) 2026-04-05 
06:29:55.042483 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_69eb92e2080c496e98dd0ae944b48875 (vhost: /, messages: 0) 2026-04-05 06:29:55.042494 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_7113c27131b049eb9a2ce248a95af3e8 (vhost: /, messages: 0) 2026-04-05 06:29:55.042505 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_c50ebf563a244b5bae4111f58f5330cf (vhost: /, messages: 0) 2026-04-05 06:29:55.042516 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - central_fanout_ed2957344d104be8a65f4d5814f9e42e (vhost: /, messages: 0) 2026-04-05 06:29:55.042527 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-04-05 06:29:55.042539 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.042576 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.042588 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.042599 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup_fanout_8aa0ab4d405e4c00ab6db69a9c9e977f (vhost: /, messages: 0) 2026-04-05 06:29:55.042610 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup_fanout_a826f2c5deef4d078e4a72a5709acaf3 (vhost: /, messages: 0) 2026-04-05 06:29:55.042620 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-backup_fanout_b6f5f1ad86bd45e98386fcf4964885c0 (vhost: /, messages: 0) 2026-04-05 06:29:55.042632 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-04-05 06:29:55.042649 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.042660 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.042672 | orchestrator | 2026-04-05 
06:29:55 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.042813 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler_fanout_05257801e9394e9197abde2465c1f8d4 (vhost: /, messages: 0) 2026-04-05 06:29:55.043124 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler_fanout_296d812cee2c4eb98de849e93302d13e (vhost: /, messages: 0) 2026-04-05 06:29:55.043394 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-scheduler_fanout_e939052aa5fa4222a51963131d0ad656 (vhost: /, messages: 0) 2026-04-05 06:29:55.043415 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-04-05 06:29:55.043443 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-04-05 06:29:55.043462 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.043811 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_98f846289b7046dd8f722c90674dab1c (vhost: /, messages: 0) 2026-04-05 06:29:55.043840 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-04-05 06:29:55.043972 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.044002 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_d3e31f2ad4974ce39d63924ee285336d (vhost: /, messages: 0) 2026-04-05 06:29:55.044022 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-04-05 06:29:55.044056 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.044074 | orchestrator | 2026-04-05 06:29:55 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_6706e895b9e94e83818787eca5bb21b4 (vhost: /, messages: 0) 2026-04-05 06:29:55.044203 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume_fanout_584474dd348944f5ad3ada220b6c5ca5 (vhost: /, messages: 0) 2026-04-05 06:29:55.044223 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume_fanout_a2349b320b994c8499335dc559b2d2c3 (vhost: /, messages: 0) 2026-04-05 06:29:55.044467 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - cinder-volume_fanout_eb8e96cde2a4499d857ae685721ba1ee (vhost: /, messages: 0) 2026-04-05 06:29:55.044512 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute (vhost: /, messages: 0) 2026-04-05 06:29:55.044534 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-04-05 06:29:55.044554 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-04-05 06:29:55.044574 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-04-05 06:29:55.044594 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute_fanout_b005076a2d2b465e8242ab7efcd08048 (vhost: /, messages: 0) 2026-04-05 06:29:55.045062 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute_fanout_c584f73b599549bd915e512f91f40c58 (vhost: /, messages: 0) 2026-04-05 06:29:55.045093 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - compute_fanout_f91ef7ae45454b7f9c48cc681008e0de (vhost: /, messages: 0) 2026-04-05 06:29:55.045103 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor (vhost: /, messages: 0) 2026-04-05 06:29:55.045114 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.045240 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.045325 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-04-05 06:29:55.045586 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_09be5df6aec842e799ada5dc6dd4b42b (vhost: /, messages: 0) 2026-04-05 06:29:55.045671 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_3b337532ba62418d9ef2b00b79bf1727 (vhost: /, messages: 0) 2026-04-05 06:29:55.045975 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_3d8b35bfab694c46879d9111d69e3335 (vhost: /, messages: 0) 2026-04-05 06:29:55.045995 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_4715d9520b64424bbd05bbc06fab3885 (vhost: /, messages: 0) 2026-04-05 06:29:55.046172 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_b4a9382d20ce4b73bdb7a7ee041bc2c6 (vhost: /, messages: 0) 2026-04-05 06:29:55.046306 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - conductor_fanout_f81b52af7a4b49d2bc831503c7e16cdb (vhost: /, messages: 0) 2026-04-05 06:29:55.046481 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - event.sample (vhost: /, messages: 4) 2026-04-05 06:29:55.046619 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-05 06:29:55.046741 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor.7ristaccxjw2 (vhost: /, messages: 0) 2026-04-05 06:29:55.046896 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor.pwl742r3p6gf (vhost: /, messages: 0) 2026-04-05 06:29:55.047076 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor.ytinpazuiaaz (vhost: /, messages: 0) 2026-04-05 06:29:55.047461 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_013aea6961e148de939216aebc1aea8f (vhost: /, messages: 0) 2026-04-05 06:29:55.047480 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_37125cf6dd734863b0fed6dc3045a5a3 (vhost: /, messages: 0) 2026-04-05 06:29:55.047715 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_735461c06f214014a111a40e0d999058 (vhost: /, 
messages: 0) 2026-04-05 06:29:55.047734 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_7d980fdfb1ab4fa6a6ceaa55f4c071f9 (vhost: /, messages: 0) 2026-04-05 06:29:55.047744 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_7db7ce94ef8c4e1bbf1b0f4053e347f9 (vhost: /, messages: 0) 2026-04-05 06:29:55.047869 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_7f9fd00ef4b3425f8ae8e286f4aa7bb8 (vhost: /, messages: 0) 2026-04-05 06:29:55.048084 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_a55b0a482f774f11b32bff88a473874e (vhost: /, messages: 0) 2026-04-05 06:29:55.048100 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_d5dedb15ef3a412082262ab047959cfa (vhost: /, messages: 0) 2026-04-05 06:29:55.048357 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - magnum-conductor_fanout_df7d0e19aae04ad482ce69521ae3a8f7 (vhost: /, messages: 0) 2026-04-05 06:29:55.048372 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-05 06:29:55.048380 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.048509 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.048635 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.048752 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data_fanout_1c3827348c49475eaedf24f5af518518 (vhost: /, messages: 0) 2026-04-05 06:29:55.048936 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data_fanout_212822918f9e4e8eb507bc0c138b4137 (vhost: /, messages: 0) 2026-04-05 06:29:55.049059 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-data_fanout_dd3eaa5222e744749ebb7ef97ca22124 (vhost: /, messages: 0) 2026-04-05 06:29:55.049307 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-04-05 06:29:55.049485 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-05 06:29:55.049601 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-05 06:29:55.050950 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-05 06:29:55.050967 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler_fanout_5d777f493db7426581d743b77fa485f4 (vhost: /, messages: 0) 2026-04-05 06:29:55.050975 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler_fanout_68433b46dcac407ea47660d6b34848d4 (vhost: /, messages: 0) 2026-04-05 06:29:55.050982 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-scheduler_fanout_f09657193ac74ff88de673282dc1e856 (vhost: /, messages: 0) 2026-04-05 06:29:55.050989 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-05 06:29:55.050996 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-05 06:29:55.051003 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-05 06:29:55.051009 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-05 06:29:55.051016 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share_fanout_03dbe29bbda1476290e4cc7fcf03cc4c (vhost: /, messages: 0) 2026-04-05 06:29:55.051023 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share_fanout_b50e428a08f04c24b71ced18f8c3bd38 (vhost: /, messages: 0) 2026-04-05 06:29:55.051030 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - manila-share_fanout_fbaff35712e444ab89d1cfcf2af7b4b3 (vhost: /, messages: 0) 2026-04-05 06:29:55.051036 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - 
notifications.audit (vhost: /, messages: 0)
2026-04-05 06:29:55.051053 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-04-05 06:29:55.051060 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-04-05 06:29:55.051066 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-04-05 06:29:55.051267 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-04-05 06:29:55.051280 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-04-05 06:29:55.051287 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-04-05 06:29:55.051341 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-04-05 06:29:55.051572 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.051591 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.051691 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.051891 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2_fanout_1d16de4fe59c4407b195ad9d83358cb2 (vhost: /, messages: 0)
2026-04-05 06:29:55.052152 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2_fanout_56273dee4fbc4c6e80086c97f1898c58 (vhost: /, messages: 0)
2026-04-05 06:29:55.052172 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - octavia_provisioning_v2_fanout_82302d6b1447470d844adb74f014144b (vhost: /, messages: 0)
2026-04-05 06:29:55.052297 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer (vhost: /, messages: 0)
2026-04-05 06:29:55.052355 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.052365 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.052752 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.052783 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_37d7250190064fbeb7141ea09ebad6fe (vhost: /, messages: 0)
2026-04-05 06:29:55.053059 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_7ac108ae955243c5934e01feca93ea73 (vhost: /, messages: 0)
2026-04-05 06:29:55.053118 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_8109401e79994bd399597e4538316791 (vhost: /, messages: 0)
2026-04-05 06:29:55.053127 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_a8bfe0d57eed4c05bd2dd1bc5af5fe1d (vhost: /, messages: 0)
2026-04-05 06:29:55.053311 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_b5b7a4c87010407ab2092c7b7688b6f4 (vhost: /, messages: 0)
2026-04-05 06:29:55.053324 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - producer_fanout_da99d47fff2d4411bc44c37e937245a0 (vhost: /, messages: 0)
2026-04-05 06:29:55.053506 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-04-05 06:29:55.053519 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.054441 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.054667 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.054711 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_1c8a1c1ed4fa4debb39914da8ceaa182 (vhost: /, messages: 0)
2026-04-05 06:29:55.054725 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_2b1e3c5acce041b5a96bcc64510d266a (vhost: /, messages: 0)
2026-04-05 06:29:55.054737 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_44791d549ef94fb992c202a3c6d939a9 (vhost: /, messages: 0)
2026-04-05 06:29:55.054749 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_57ff0cb1615946ac9c3c5cdafec8f587 (vhost: /, messages: 0)
2026-04-05 06:29:55.054759 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_9e3745e123f04cb69e36f1ec02003873 (vhost: /, messages: 0)
2026-04-05 06:29:55.054805 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_b73d144196594abcb7d7f9a6d6e8730a (vhost: /, messages: 0)
2026-04-05 06:29:55.054817 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_c86f3dbbf51743fdb2448de3c99a2c59 (vhost: /, messages: 0)
2026-04-05 06:29:55.054828 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_d62c237a5dcd4effa55b75d9cb09060e (vhost: /, messages: 0)
2026-04-05 06:29:55.054839 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-plugin_fanout_fee1de96c0e9406c8a7c8c9baec32cf6 (vhost: /, messages: 0)
2026-04-05 06:29:55.054851 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-04-05 06:29:55.054862 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.054885 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.054896 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.054918 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_133ee854161e4238b0c4eb10e96938f3 (vhost: /, messages: 0)
2026-04-05 06:29:55.054930 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_1705bb475f8648a6a3d06d4e22c183cf (vhost: /, messages: 0)
2026-04-05 06:29:55.054941 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_29cc191b7cb54c7b81e42f5dfdc72cbe (vhost: /, messages: 0)
2026-04-05 06:29:55.056106 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_3272c1893bf946e6bd8aeecd1669acd3 (vhost: /, messages: 0)
2026-04-05 06:29:55.056131 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_64e6a82af8a84020986efd2aadc8affc (vhost: /, messages: 0)
2026-04-05 06:29:55.056142 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_96e119749bb340f597eb3c9648901053 (vhost: /, messages: 0)
2026-04-05 06:29:55.056153 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_9ab3ae7144f44cd0a64277fd7a63dc16 (vhost: /, messages: 0)
2026-04-05 06:29:55.056164 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_a77a6c571c00408fb4b3b48bbc00bd6e (vhost: /, messages: 0)
2026-04-05 06:29:55.056174 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_b9f9bb5fd8454b3092428a73992052e8 (vhost: /, messages: 0)
2026-04-05 06:29:55.056261 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_c0fd7895513943c7a1fd70348e134ce5 (vhost: /, messages: 0)
2026-04-05 06:29:55.056274 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_c827ad3fa9284a5bb42c1b34e174ae02 (vhost: /, messages: 0)
2026-04-05 06:29:55.056349 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_ce58beb3e33e46b4a268d183329d5625 (vhost: /, messages: 0)
2026-04-05 06:29:55.056382 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_d8e67a052381430285f580f8f2a06a80 (vhost: /, messages: 0)
2026-04-05 06:29:55.056393 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_dbba2d212c2d48b08ad7a306ea82b584 (vhost: /, messages: 0)
2026-04-05 06:29:55.056403 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_ea585a83ceb2426289426ef5fc053a39 (vhost: /, messages: 0)
2026-04-05 06:29:55.056414 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_ef9feae2972f426094a6df7c69481917 (vhost: /, messages: 0)
2026-04-05 06:29:55.056425 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_f7911e950d994e14b0da35ba86ac3a32 (vhost: /, messages: 0)
2026-04-05 06:29:55.056436 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-reports-plugin_fanout_faedd97aa49348229d8406d90b0e45b0 (vhost: /, messages: 0)
2026-04-05 06:29:55.056582 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-04-05 06:29:55.056597 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.056608 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.056619 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.056700 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_123c531c331f45d68339f20722ea43db (vhost: /, messages: 0)
2026-04-05 06:29:55.056717 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_1f2fdbc809564dd18b63c894ba276b30 (vhost: /, messages: 0)
2026-04-05 06:29:55.056729 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_5bdb91a1240c45a3baeca00f6cbea67a (vhost: /, messages: 0)
2026-04-05 06:29:55.056740 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_ae46c563599e4bc88b4dcb90153da499 (vhost: /, messages: 0)
2026-04-05 06:29:55.056750 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_bffb21ff9872465a93019a510b2a8722 (vhost: /, messages: 0)
2026-04-05 06:29:55.056761 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_c16e20888dd54ae6b91bec430003051d (vhost: /, messages: 0)
2026-04-05 06:29:55.056883 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_ee1776e97b5247e8b9e0884343d13048 (vhost: /, messages: 0)
2026-04-05 06:29:55.056909 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - q-server-resource-versions_fanout_f61797c117634bb1862a4656ecd5db48 (vhost: /, messages: 0)
2026-04-05 06:29:55.056921 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_07a419e38f19430b9314d98eed01cba2 (vhost: /, messages: 0)
2026-04-05 06:29:55.056931 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_0a63a28b086b4251b3fa295a09058239 (vhost: /, messages: 0)
2026-04-05 06:29:55.057015 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_12083e098ee44be9b44e430ba01faabe (vhost: /, messages: 0)
2026-04-05 06:29:55.057033 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_20a8cdd4da9f4025b7aabd80ad19d44a (vhost: /, messages: 0)
2026-04-05 06:29:55.057260 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_24f9d61dfc0a4b65a891ba0981bf8be0 (vhost: /, messages: 0)
2026-04-05 06:29:55.057281 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_2bcb4a1834a3476c9ff3efe3996e6279 (vhost: /, messages: 0)
2026-04-05 06:29:55.057292 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_3a57173f55ee4d168922c1fbc207cbd5 (vhost: /, messages: 0)
2026-04-05 06:29:55.057521 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_48a490fb8e0949b5ada240f164cc30fe (vhost: /, messages: 0)
2026-04-05 06:29:55.057589 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_5ad84183739c48cb95db6208c9c5ca6e (vhost: /, messages: 0)
2026-04-05 06:29:55.057665 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_5edcf49a330a421eb5777d81b8755cdd (vhost: /, messages: 0)
2026-04-05 06:29:55.057680 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_5fb05cb52f484dd2814e72d0e2ee98c0 (vhost: /, messages: 0)
2026-04-05 06:29:55.057691 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_5fb106242aae4410b594d35b6a1bbd1b (vhost: /, messages: 0)
2026-04-05 06:29:55.057707 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_6d9fad6bdfd74bdd99d538ec2f66fbcb (vhost: /, messages: 0)
2026-04-05 06:29:55.057718 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_712981ae54bc4e6587b145067f4ec85a (vhost: /, messages: 0)
2026-04-05 06:29:55.057729 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_8876fac2b0924ff4a6c462edf94c4e8d (vhost: /, messages: 0)
2026-04-05 06:29:55.058008 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_8f343648f282475bb94dd5265e7d1303 (vhost: /, messages: 0)
2026-04-05 06:29:55.058074 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_9e1adf0f74174ed19a8788e594c696fa (vhost: /, messages: 0)
2026-04-05 06:29:55.058086 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_dfa5756900bc4c7bbced0b497ca4d3ce (vhost: /, messages: 1)
2026-04-05 06:29:55.058097 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - reply_fd8c1c1a85584e2690ac0571d126c11b (vhost: /, messages: 0)
2026-04-05 06:29:55.058402 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-04-05 06:29:55.058440 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.058461 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.058476 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.058484 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_33fc698549524854a998a0cfcb8745b5 (vhost: /, messages: 0)
2026-04-05 06:29:55.061118 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_546d399f7a7949239facb671d12dc41d (vhost: /, messages: 0)
2026-04-05 06:29:55.061188 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_738a2fe057c147bab687cc0c59e67e0f (vhost: /, messages: 0)
2026-04-05 06:29:55.061202 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_8170dc328c2c4832aa5f3ea2d530c68d (vhost: /, messages: 0)
2026-04-05 06:29:55.061214 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_e413eea5c1144b39a32a74b19848964a (vhost: /, messages: 0)
2026-04-05 06:29:55.061225 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - scheduler_fanout_fa1ac23e4b03440d9d94e315fd6cef91 (vhost: /, messages: 0)
2026-04-05 06:29:55.061236 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker (vhost: /, messages: 0)
2026-04-05 06:29:55.061248 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-04-05 06:29:55.061260 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-04-05 06:29:55.061299 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-04-05 06:29:55.061312 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_31de4b73eef6424b9e0fb59b5684b6c7 (vhost: /, messages: 0)
2026-04-05 06:29:55.061348 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_3334b2477c5d4120b0ac4268c9480f64 (vhost: /, messages: 0)
2026-04-05 06:29:55.061359 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_ab1fa9e0318d418db1446a10829cad0f (vhost: /, messages: 0)
2026-04-05 06:29:55.061370 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_b4322b25e937465caffc6999bacf90f2 (vhost: /, messages: 0)
2026-04-05 06:29:55.061381 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_b4898e808be14f60a5861b12b1fa428e (vhost: /, messages: 0)
2026-04-05 06:29:55.061392 | orchestrator | 2026-04-05 06:29:55 | INFO  |  - worker_fanout_ef2ef92dedcd43e2825ab858da35f1fc (vhost: /, messages: 0)
2026-04-05 06:29:55.385131 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-05 06:30:01.720173 | orchestrator | 2026-04-05 06:30:01 | ERROR  | Unable to get ansible vault password
2026-04-05 06:30:01.720289 | orchestrator | 2026-04-05 06:30:01 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 06:30:01.720316 | orchestrator | 2026-04-05 06:30:01 | ERROR  | Dropping encrypted entries
2026-04-05 06:30:01.754943 | orchestrator | 2026-04-05 06:30:01 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-05 06:30:01.781432 | orchestrator | 2026-04-05 06:30:01 | INFO  | Found 46 exchange(s) in vhost '/':
2026-04-05 06:30:01.781520 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - aodh (type: topic, transient)
2026-04-05 06:30:01.781533 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - barbican.workers_fanout (type: fanout, transient)
2026-04-05 06:30:01.781547 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - ceilometer (type: topic, transient)
2026-04-05 06:30:01.781559 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - central_fanout (type: fanout, transient)
2026-04-05 06:30:01.781677 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder (type: topic, transient)
2026-04-05 06:30:01.781692 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-backup_fanout (type: fanout, transient)
2026-04-05 06:30:01.781714 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient)
2026-04-05 06:30:01.782146 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient)
2026-04-05 06:30:01.782176 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient)
2026-04-05 06:30:01.782433 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient)
2026-04-05 06:30:01.782458 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - cinder-volume_fanout (type: fanout, transient)
2026-04-05 06:30:01.783270 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - compute_fanout (type: fanout, transient)
2026-04-05 06:30:01.783343 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - conductor_fanout (type: fanout, transient)
2026-04-05 06:30:01.783356 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - designate (type: topic, transient)
2026-04-05 06:30:01.783366 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - dns (type: topic, transient)
2026-04-05 06:30:01.783374 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - glance (type: topic, transient)
2026-04-05 06:30:01.783383 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - heat (type: topic, transient)
2026-04-05 06:30:01.783618 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - ironic (type: topic, transient)
2026-04-05 06:30:01.783661 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - keystone (type: topic, transient)
2026-04-05 06:30:01.783670 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - l3_agent_fanout (type: fanout, transient)
2026-04-05 06:30:01.783902 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - magnum (type: topic, transient)
2026-04-05 06:30:01.784203 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - magnum-conductor_fanout (type: fanout, transient)
2026-04-05 06:30:01.784219 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - manila-data_fanout (type: fanout, transient)
2026-04-05 06:30:01.784227 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - manila-scheduler_fanout (type: fanout, transient)
2026-04-05 06:30:01.784249 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - manila-share_fanout (type: fanout, transient)
2026-04-05 06:30:01.784409 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron (type: topic, transient)
2026-04-05 06:30:01.784423 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient)
2026-04-05 06:30:01.784644 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient)
2026-04-05 06:30:01.784661 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, transient)
2026-04-05 06:30:01.784813 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient)
2026-04-05 06:30:01.785041 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient)
2026-04-05 06:30:01.785203 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - nova (type: topic, transient)
2026-04-05 06:30:01.785217 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - octavia (type: topic, transient)
2026-04-05 06:30:01.785515 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient)
2026-04-05 06:30:01.785530 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - openstack (type: topic, transient)
2026-04-05 06:30:01.785790 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - producer_fanout (type: fanout, transient)
2026-04-05 06:30:01.785807 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient)
2026-04-05 06:30:01.785816 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient)
2026-04-05 06:30:01.785917 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - q-plugin_fanout (type: fanout, transient)
2026-04-05 06:30:01.785930 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient)
2026-04-05 06:30:01.786215 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient)
2026-04-05 06:30:01.786231 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - scheduler_fanout (type: fanout, transient)
2026-04-05 06:30:01.786240 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - swift (type: topic, transient)
2026-04-05 06:30:01.786424 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - trove (type: topic, transient)
2026-04-05 06:30:01.786438 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - worker_fanout (type: fanout, transient)
2026-04-05 06:30:01.786447 | orchestrator | 2026-04-05 06:30:01 | INFO  |  - zaqar (type: topic, transient)
2026-04-05 06:30:02.048488 | orchestrator | + osism apply -a upgrade keystone
2026-04-05 06:30:03.393529 | orchestrator | 2026-04-05 06:30:03 | INFO  | Prepare task for execution of keystone.
2026-04-05 06:30:03.458477 | orchestrator | 2026-04-05 06:30:03 | INFO  | Task 2f10bb0c-1b82-4c7b-963e-c6695759a8a8 (keystone) was prepared for execution.
2026-04-05 06:30:03.458548 | orchestrator | 2026-04-05 06:30:03 | INFO  | It takes a moment until task 2f10bb0c-1b82-4c7b-963e-c6695759a8a8 (keystone) has been started and output is visible here.
2026-04-05 06:30:17.334842 | orchestrator |
2026-04-05 06:30:17.334936 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 06:30:17.334947 | orchestrator |
2026-04-05 06:30:17.334955 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 06:30:17.334962 | orchestrator | Sunday 05 April 2026 06:30:08 +0000 (0:00:01.824) 0:00:01.825 **********
2026-04-05 06:30:17.334969 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:30:17.334977 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:30:17.334985 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:30:17.334991 | orchestrator |
2026-04-05 06:30:17.334999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 06:30:17.335006 | orchestrator | Sunday 05 April 2026 06:30:10 +0000 (0:00:01.677) 0:00:03.502 **********
2026-04-05 06:30:17.335013 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-05 06:30:17.335020 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-05 06:30:17.335027 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-05 06:30:17.335033 | orchestrator |
2026-04-05 06:30:17.335040 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-05 06:30:17.335047 | orchestrator |
2026-04-05 06:30:17.335053 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 06:30:17.335060 | orchestrator | Sunday 05 April 2026 06:30:11 +0000 (0:00:01.666) 0:00:05.169 **********
2026-04-05 06:30:17.335067 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:30:17.335074 | orchestrator |
2026-04-05 06:30:17.335081 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-05 06:30:17.335088 | orchestrator | Sunday 05 April 2026 06:30:15 +0000 (0:00:03.246) 0:00:08.415 **********
2026-04-05 06:30:17.335111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:17.335122 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:17.335163 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:17.335173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:17.335185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:17.335193 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:17.335199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 06:30:17.335213 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 06:30:17.335225 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-05 06:30:28.600258 | orchestrator |
2026-04-05 06:30:28.600376 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-05 06:30:28.600393 | orchestrator | Sunday 05 April 2026 06:30:18 +0000 (0:00:03.416) 0:00:11.832 **********
2026-04-05 06:30:28.600405 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:30:28.600418 | orchestrator |
2026-04-05 06:30:28.600429 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-05 06:30:28.600441 | orchestrator | Sunday 05 April 2026 06:30:19 +0000 (0:00:01.116) 0:00:12.949 **********
2026-04-05 06:30:28.600452 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:30:28.600463 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:30:28.600474 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:30:28.600485 | orchestrator |
2026-04-05 06:30:28.600496 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-05 06:30:28.600507 | orchestrator | Sunday 05 April 2026 06:30:21 +0000 (0:00:01.629) 0:00:14.579 **********
2026-04-05 06:30:28.600517 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 06:30:28.600528 | orchestrator |
2026-04-05 06:30:28.600539 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-05 06:30:28.600550 | orchestrator | Sunday 05 April 2026 06:30:23 +0000 (0:00:02.345) 0:00:16.925 **********
2026-04-05 06:30:28.600562 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:30:28.600573 | orchestrator |
2026-04-05 06:30:28.600584 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-05 06:30:28.600595 | orchestrator | Sunday 05 April 2026 06:30:25 +0000 (0:00:01.937) 0:00:18.863 **********
2026-04-05 06:30:28.600628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:28.600669 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:28.600701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-05 06:30:28.600715 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:28.600776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:28.600789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-05 06:30:28.600809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:28.600822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:28.600833 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:28.600844 | orchestrator | 2026-04-05 06:30:28.600863 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-05 06:30:32.002521 | orchestrator | Sunday 05 April 2026 06:30:29 +0000 (0:00:04.043) 0:00:22.906 ********** 2026-04-05 06:30:32.002655 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:32.002708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:32.002820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:32.002844 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:32.002865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:32.002911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:32.002935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:32.002955 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:30:32.002984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:32.003013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:32.003026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:32.003037 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:30:32.003049 | orchestrator | 2026-04-05 06:30:32.003064 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-05 06:30:32.003078 | orchestrator | Sunday 05 April 2026 06:30:31 +0000 (0:00:02.032) 0:00:24.939 ********** 2026-04-05 06:30:32.003103 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:34.733419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:34.733551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:34.733591 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:34.733608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:34.733622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:34.733634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:34.733646 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:30:34.733679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:34.733706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:34.733804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:34.733821 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:30:34.733834 | orchestrator | 2026-04-05 06:30:34.733847 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-05 06:30:34.733862 | orchestrator | Sunday 05 April 2026 06:30:33 +0000 (0:00:01.741) 0:00:26.680 ********** 2026-04-05 06:30:34.733875 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:34.733898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:40.574228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:40.574349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-05 06:30:40.574365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 06:30:40.574372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 06:30:40.574380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 
06:30:40.574403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:40.574423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:40.574432 | orchestrator | 2026-04-05 06:30:40.574441 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-05 06:30:40.574450 | orchestrator | Sunday 05 April 2026 06:30:37 +0000 (0:00:04.342) 0:00:31.023 ********** 2026-04-05 06:30:40.574458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:40.574467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:40.574475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:40.574490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:51.880584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:30:51.880674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:51.880686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:51.880694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:51.880735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:30:51.880762 | orchestrator | 2026-04-05 06:30:51.880771 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-05 06:30:51.880779 | orchestrator | Sunday 05 April 2026 06:30:43 +0000 (0:00:06.266) 0:00:37.289 ********** 2026-04-05 06:30:51.880786 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:30:51.880794 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:30:51.880801 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:30:51.880808 | orchestrator | 2026-04-05 06:30:51.880815 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-05 06:30:51.880822 | orchestrator | Sunday 05 April 2026 06:30:46 +0000 (0:00:02.586) 0:00:39.876 ********** 2026-04-05 06:30:51.880828 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:51.880850 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:30:51.880857 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 06:30:51.880864 | orchestrator | 2026-04-05 06:30:51.880870 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-05 06:30:51.880877 | orchestrator | Sunday 05 April 2026 06:30:48 +0000 (0:00:01.624) 0:00:41.501 ********** 2026-04-05 06:30:51.880884 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:51.880895 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:30:51.880902 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:30:51.880908 | orchestrator | 2026-04-05 06:30:51.880915 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-05 06:30:51.880922 | orchestrator | Sunday 05 April 2026 06:30:49 +0000 (0:00:01.424) 0:00:42.925 ********** 2026-04-05 06:30:51.880928 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:51.880935 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:30:51.880941 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:30:51.880948 | orchestrator | 2026-04-05 06:30:51.880955 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-05 06:30:51.880961 | orchestrator | Sunday 05 April 2026 06:30:51 +0000 (0:00:01.837) 0:00:44.763 ********** 2026-04-05 06:30:51.880969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:30:51.880977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:30:51.880984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:30:51.880996 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:30:51.881009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:31:17.569965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:31:17.570186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:31:17.570222 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:31:17.570249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:31:17.570304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:31:17.570349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:31:17.570384 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:31:17.570406 | orchestrator | 2026-04-05 06:31:17.570426 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-05 06:31:17.570446 | orchestrator | Sunday 05 April 2026 06:30:53 +0000 (0:00:01.741) 0:00:46.504 ********** 2026-04-05 06:31:17.570459 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:31:17.570472 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:31:17.570483 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:31:17.570496 | orchestrator | 2026-04-05 06:31:17.570509 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-05 06:31:17.570541 | orchestrator | Sunday 05 April 2026 06:30:54 +0000 (0:00:01.411) 0:00:47.916 ********** 2026-04-05 06:31:17.570564 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 06:31:17.570578 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 06:31:17.570590 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-05 06:31:17.570602 | orchestrator | 2026-04-05 06:31:17.570615 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-05 06:31:17.570627 | orchestrator | Sunday 05 April 2026 06:30:57 +0000 (0:00:03.073) 0:00:50.989 ********** 2026-04-05 06:31:17.570640 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 06:31:17.570652 | orchestrator | 2026-04-05 06:31:17.570664 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-05 06:31:17.570708 | orchestrator | Sunday 05 April 2026 06:30:59 +0000 (0:00:02.029) 0:00:53.018 ********** 2026-04-05 06:31:17.570722 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:31:17.570734 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:31:17.570746 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:31:17.570758 | orchestrator | 2026-04-05 06:31:17.570771 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-05 06:31:17.570784 | orchestrator | Sunday 05 April 2026 06:31:01 +0000 (0:00:01.750) 0:00:54.769 ********** 2026-04-05 06:31:17.570794 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 06:31:17.570805 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 06:31:17.570815 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 06:31:17.570826 | orchestrator | 2026-04-05 06:31:17.570836 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-05 06:31:17.570859 | orchestrator | Sunday 05 April 2026 06:31:03 +0000 (0:00:02.253) 0:00:57.023 ********** 2026-04-05 06:31:17.570870 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:31:17.570880 | orchestrator | ok: 
[testbed-node-1] 2026-04-05 06:31:17.570891 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:31:17.570901 | orchestrator | 2026-04-05 06:31:17.570912 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-05 06:31:17.570922 | orchestrator | Sunday 05 April 2026 06:31:05 +0000 (0:00:01.424) 0:00:58.448 ********** 2026-04-05 06:31:17.570933 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 06:31:17.570944 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 06:31:17.570954 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-05 06:31:17.570965 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 06:31:17.570977 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 06:31:17.570988 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-05 06:31:17.570998 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 06:31:17.571009 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 06:31:17.571019 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-05 06:31:17.571030 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 06:31:17.571041 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 06:31:17.571051 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-05 06:31:17.571062 | orchestrator | ok: [testbed-node-0] => 
(item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 06:31:17.571072 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 06:31:17.571083 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-05 06:31:17.571094 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:31:17.571104 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:31:17.571115 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:31:17.571126 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:31:17.571136 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:31:17.571147 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:31:17.571157 | orchestrator | 2026-04-05 06:31:17.571167 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-05 06:31:17.571178 | orchestrator | Sunday 05 April 2026 06:31:15 +0000 (0:00:09.930) 0:01:08.379 ********** 2026-04-05 06:31:17.571189 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:31:17.571199 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:31:17.571210 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:31:17.571221 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:31:17.571238 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:31:25.426912 | 
orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:31:25.427079 | orchestrator | 2026-04-05 06:31:25.427110 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-05 06:31:25.427132 | orchestrator | Sunday 05 April 2026 06:31:19 +0000 (0:00:04.147) 0:01:12.526 ********** 2026-04-05 06:31:25.427158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:31:25.427185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:31:25.427206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-05 06:31:25.427249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-05 06:31:25.427332 | orchestrator | 2026-04-05 06:31:25.427343 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-05 
06:31:25.427354 | orchestrator | Sunday 05 April 2026 06:31:23 +0000 (0:00:04.346) 0:01:16.873 ********** 2026-04-05 06:31:25.427365 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 06:31:25.427377 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:31:25.427388 | orchestrator | } 2026-04-05 06:31:25.427399 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 06:31:25.427410 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:31:25.427427 | orchestrator | } 2026-04-05 06:31:25.427438 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 06:31:25.427449 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:31:25.427459 | orchestrator | } 2026-04-05 06:31:25.427470 | orchestrator | 2026-04-05 06:31:25.427481 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 06:31:25.427492 | orchestrator | Sunday 05 April 2026 06:31:25 +0000 (0:00:01.510) 0:01:18.384 ********** 2026-04-05 06:31:25.427519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:33:47.073710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:33:47.073831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:33:47.073851 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:33:47.073869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:33:47.073912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:33:47.073939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-04-05 06:33:47.073951 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:33:47.073983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-05 06:33:47.074002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-05 06:33:47.074101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-05 06:33:47.074123 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:33:47.074135 | orchestrator | 2026-04-05 06:33:47.074147 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-04-05 06:33:47.074162 | orchestrator | Sunday 05 April 2026 06:31:27 +0000 (0:00:01.966) 0:01:20.350 ********** 2026-04-05 06:33:47.074188 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:33:47.074201 | orchestrator | 2026-04-05 06:33:47.074214 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-04-05 06:33:47.074227 | orchestrator | Sunday 05 April 2026 06:31:30 +0000 (0:00:03.140) 0:01:23.491 ********** 2026-04-05 06:33:47.074242 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:33:47.074262 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:33:47.074281 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:33:47.074302 | orchestrator | 2026-04-05 06:33:47.074321 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-04-05 06:33:47.074338 | orchestrator | Sunday 05 April 2026 06:31:31 +0000 (0:00:01.597) 0:01:25.088 ********** 2026-04-05 06:33:47.074352 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:33:47.074365 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:33:47.074377 | orchestrator | changed: [testbed-node-2] 2026-04-05 
06:33:47.074390 | orchestrator | 2026-04-05 06:33:47.074403 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 06:33:47.074416 | orchestrator | Sunday 05 April 2026 06:31:33 +0000 (0:00:01.979) 0:01:27.068 ********** 2026-04-05 06:33:47.074426 | orchestrator | 2026-04-05 06:33:47.074437 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 06:33:47.074447 | orchestrator | Sunday 05 April 2026 06:31:34 +0000 (0:00:00.462) 0:01:27.530 ********** 2026-04-05 06:33:47.074458 | orchestrator | 2026-04-05 06:33:47.074469 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-05 06:33:47.074479 | orchestrator | Sunday 05 April 2026 06:31:34 +0000 (0:00:00.451) 0:01:27.982 ********** 2026-04-05 06:33:47.074490 | orchestrator | 2026-04-05 06:33:47.074507 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ******************** 2026-04-05 06:33:47.074518 | orchestrator | Sunday 05 April 2026 06:31:35 +0000 (0:00:00.805) 0:01:28.787 ********** 2026-04-05 06:33:47.074560 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:33:47.074572 | orchestrator | 2026-04-05 06:33:47.074583 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-05 06:33:47.074594 | orchestrator | Sunday 05 April 2026 06:32:39 +0000 (0:01:03.791) 0:02:32.579 ********** 2026-04-05 06:33:47.074605 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:33:47.074616 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:33:47.074626 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:33:47.074637 | orchestrator | 2026-04-05 06:33:47.074647 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-05 06:33:47.074660 | orchestrator | Sunday 05 April 2026 06:33:34 +0000 (0:00:54.762) 0:03:27.341 ********** 2026-04-05 
06:33:47.074678 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:33:47.074696 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:33:47.074714 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:33:47.074732 | orchestrator | 2026-04-05 06:33:47.074751 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-05 06:33:47.074781 | orchestrator | Sunday 05 April 2026 06:33:47 +0000 (0:00:13.027) 0:03:40.369 ********** 2026-04-05 06:34:17.689838 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:34:17.690010 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:34:17.690114 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:34:17.690140 | orchestrator | 2026-04-05 06:34:17.690177 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ****************** 2026-04-05 06:34:17.690203 | orchestrator | Sunday 05 April 2026 06:34:01 +0000 (0:00:14.113) 0:03:54.483 ********** 2026-04-05 06:34:17.690225 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:34:17.690247 | orchestrator | 2026-04-05 06:34:17.690268 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] ************* 2026-04-05 06:34:17.690289 | orchestrator | Sunday 05 April 2026 06:34:13 +0000 (0:00:12.683) 0:04:07.166 ********** 2026-04-05 06:34:17.690311 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:34:17.690369 | orchestrator | 2026-04-05 06:34:17.690392 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 06:34:17.690415 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 06:34:17.690439 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-05 06:34:17.690460 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-05 
06:34:17.690480 | orchestrator | 2026-04-05 06:34:17.690535 | orchestrator | 2026-04-05 06:34:17.690558 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 06:34:17.690580 | orchestrator | Sunday 05 April 2026 06:34:17 +0000 (0:00:03.408) 0:04:10.575 ********** 2026-04-05 06:34:17.690609 | orchestrator | =============================================================================== 2026-04-05 06:34:17.690630 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 63.79s 2026-04-05 06:34:17.690649 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 54.76s 2026-04-05 06:34:17.690669 | orchestrator | keystone : Restart keystone container ---------------------------------- 14.11s 2026-04-05 06:34:17.690689 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 13.03s 2026-04-05 06:34:17.690709 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 12.68s 2026-04-05 06:34:17.690730 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.93s 2026-04-05 06:34:17.690752 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.27s 2026-04-05 06:34:17.690771 | orchestrator | service-check-containers : keystone | Check containers ------------------ 4.35s 2026-04-05 06:34:17.690788 | orchestrator | keystone : Copying over config.json files for services ------------------ 4.34s 2026-04-05 06:34:17.690799 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 4.15s 2026-04-05 06:34:17.690810 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 4.04s 2026-04-05 06:34:17.690821 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 3.42s 2026-04-05 06:34:17.690832 | orchestrator | keystone : Disable 
log_bin_trust_function_creators function ------------- 3.41s 2026-04-05 06:34:17.690843 | orchestrator | keystone : include_tasks ------------------------------------------------ 3.25s 2026-04-05 06:34:17.690853 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 3.14s 2026-04-05 06:34:17.690864 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 3.07s 2026-04-05 06:34:17.690875 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.59s 2026-04-05 06:34:17.690886 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 2.35s 2026-04-05 06:34:17.690897 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 2.25s 2026-04-05 06:34:17.690907 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 2.03s 2026-04-05 06:34:17.877281 | orchestrator | + osism apply -a upgrade placement 2026-04-05 06:34:19.137022 | orchestrator | 2026-04-05 06:34:19 | INFO  | Prepare task for execution of placement. 2026-04-05 06:34:19.202558 | orchestrator | 2026-04-05 06:34:19 | INFO  | Task d9e0ecfd-c2bd-40cf-be8b-1ec11e860540 (placement) was prepared for execution. 2026-04-05 06:34:19.202988 | orchestrator | 2026-04-05 06:34:19 | INFO  | It takes a moment until task d9e0ecfd-c2bd-40cf-be8b-1ec11e860540 (placement) has been started and output is visible here. 
2026-04-05 06:35:14.229004 | orchestrator | 2026-04-05 06:35:14.229100 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 06:35:14.229111 | orchestrator | 2026-04-05 06:35:14.229120 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 06:35:14.229148 | orchestrator | Sunday 05 April 2026 06:34:24 +0000 (0:00:01.482) 0:00:01.482 ********** 2026-04-05 06:35:14.229155 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:35:14.229164 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:35:14.229171 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:35:14.229178 | orchestrator | 2026-04-05 06:35:14.229185 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 06:35:14.229193 | orchestrator | Sunday 05 April 2026 06:34:26 +0000 (0:00:02.007) 0:00:03.490 ********** 2026-04-05 06:35:14.229201 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-05 06:35:14.229208 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-05 06:35:14.229215 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-05 06:35:14.229222 | orchestrator | 2026-04-05 06:35:14.229229 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-05 06:35:14.229237 | orchestrator | 2026-04-05 06:35:14.229244 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 06:35:14.229251 | orchestrator | Sunday 05 April 2026 06:34:29 +0000 (0:00:03.827) 0:00:07.317 ********** 2026-04-05 06:35:14.229258 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:35:14.229266 | orchestrator | 2026-04-05 06:35:14.229273 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 
2026-04-05 06:35:14.229280 | orchestrator | Sunday 05 April 2026 06:34:32 +0000 (0:00:02.202) 0:00:09.520 ********** 2026-04-05 06:35:14.229287 | orchestrator | ok: [testbed-node-0] => (item=placement (placement)) 2026-04-05 06:35:14.229295 | orchestrator | 2026-04-05 06:35:14.229303 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-04-05 06:35:14.229310 | orchestrator | Sunday 05 April 2026 06:34:37 +0000 (0:00:05.187) 0:00:14.707 ********** 2026-04-05 06:35:14.229317 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-05 06:35:14.229325 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-05 06:35:14.229332 | orchestrator | 2026-04-05 06:35:14.229339 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-05 06:35:14.229346 | orchestrator | Sunday 05 April 2026 06:34:44 +0000 (0:00:07.567) 0:00:22.274 ********** 2026-04-05 06:35:14.229353 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-05 06:35:14.229361 | orchestrator | 2026-04-05 06:35:14.229368 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-05 06:35:14.229375 | orchestrator | Sunday 05 April 2026 06:34:49 +0000 (0:00:04.322) 0:00:26.597 ********** 2026-04-05 06:35:14.229382 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-05 06:35:14.229389 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-05 06:35:14.229396 | orchestrator | 2026-04-05 06:35:14.229403 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-05 06:35:14.229410 | orchestrator | Sunday 05 April 2026 06:34:55 +0000 (0:00:06.347) 0:00:32.945 ********** 2026-04-05 06:35:14.229417 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-05 06:35:14.229425 | orchestrator | 2026-04-05 06:35:14.229432 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-05 06:35:14.229439 | orchestrator | Sunday 05 April 2026 06:34:59 +0000 (0:00:04.323) 0:00:37.269 ********** 2026-04-05 06:35:14.229525 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-05 06:35:14.229535 | orchestrator | 2026-04-05 06:35:14.229543 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 06:35:14.229550 | orchestrator | Sunday 05 April 2026 06:35:04 +0000 (0:00:05.033) 0:00:42.302 ********** 2026-04-05 06:35:14.229557 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:14.229565 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:14.229580 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:14.229588 | orchestrator | 2026-04-05 06:35:14.229596 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-05 06:35:14.229605 | orchestrator | Sunday 05 April 2026 06:35:06 +0000 (0:00:01.765) 0:00:44.067 ********** 2026-04-05 06:35:14.229646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:14.229659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:14.229669 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 
'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:14.229679 | orchestrator | 2026-04-05 06:35:14.229687 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-05 06:35:14.229696 | orchestrator | Sunday 05 April 2026 06:35:08 +0000 (0:00:02.111) 0:00:46.179 ********** 2026-04-05 06:35:14.229704 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:14.229712 | orchestrator | 2026-04-05 06:35:14.229721 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-05 06:35:14.229729 | orchestrator | Sunday 05 April 2026 06:35:09 +0000 (0:00:01.094) 0:00:47.274 ********** 2026-04-05 06:35:14.229747 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:14.229755 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:14.229763 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:14.229772 | orchestrator | 2026-04-05 06:35:14.229780 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-05 06:35:14.229788 | orchestrator | Sunday 05 April 2026 06:35:11 +0000 (0:00:01.349) 0:00:48.623 ********** 2026-04-05 06:35:14.229797 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:35:14.229806 | orchestrator | 2026-04-05 06:35:14.229814 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-05 06:35:14.229823 | orchestrator | Sunday 05 April 2026 
06:35:13 +0000 (0:00:01.861) 0:00:50.484 ********** 2026-04-05 06:35:14.229840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:17.882752 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:17.882864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:17.882912 | orchestrator | 2026-04-05 06:35:17.882926 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-05 06:35:17.882939 | orchestrator | Sunday 05 April 2026 06:35:15 +0000 (0:00:02.501) 0:00:52.986 ********** 2026-04-05 06:35:17.882952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:17.882965 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:17.883011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:17.883025 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:17.883037 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:17.883049 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:17.883060 | orchestrator | 2026-04-05 06:35:17.883071 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-05 06:35:17.883083 | orchestrator | Sunday 05 April 2026 06:35:17 +0000 (0:00:01.800) 0:00:54.786 ********** 2026-04-05 06:35:17.883094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:17.883115 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:17.883126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:17.883138 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:17.883164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:33.207286 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:33.207389 | orchestrator | 2026-04-05 06:35:33.207402 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-05 06:35:33.207445 | orchestrator | Sunday 05 April 2026 06:35:19 +0000 (0:00:01.639) 0:00:56.426 ********** 2026-04-05 06:35:33.207464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207561 | orchestrator | 2026-04-05 06:35:33.207570 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-05 06:35:33.207579 | orchestrator | Sunday 05 April 2026 06:35:21 +0000 (0:00:02.495) 0:00:58.921 ********** 2026-04-05 06:35:33.207605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207623 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:33.207643 | orchestrator | 2026-04-05 06:35:33.207653 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-05 06:35:33.207661 | orchestrator | Sunday 05 April 2026 06:35:25 +0000 (0:00:03.721) 0:01:02.643 ********** 2026-04-05 06:35:33.207670 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 06:35:33.207680 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:33.207689 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 06:35:33.207697 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:33.207711 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-05 06:35:33.207719 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:33.207728 | orchestrator | 2026-04-05 06:35:33.207737 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-05 06:35:33.207745 | orchestrator | Sunday 05 April 2026 06:35:26 +0000 (0:00:01.629) 0:01:04.273 ********** 2026-04-05 06:35:33.207754 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:35:33.207764 | orchestrator | 2026-04-05 06:35:33.207772 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-04-05 06:35:33.207781 | orchestrator | Sunday 05 April 2026 06:35:28 +0000 (0:00:01.918) 0:01:06.192 ********** 2026-04-05 06:35:33.207789 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:35:33.207798 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:35:33.207806 | orchestrator | changed: [testbed-node-2] 2026-04-05 
06:35:33.207815 | orchestrator | 2026-04-05 06:35:33.207823 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-05 06:35:33.207838 | orchestrator | Sunday 05 April 2026 06:35:31 +0000 (0:00:02.915) 0:01:09.107 ********** 2026-04-05 06:35:33.207849 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:35:33.207859 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:35:33.207869 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:35:33.207879 | orchestrator | 2026-04-05 06:35:33.207895 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-05 06:35:40.661276 | orchestrator | Sunday 05 April 2026 06:35:34 +0000 (0:00:02.534) 0:01:11.642 ********** 2026-04-05 06:35:40.661738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:40.661779 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:35:40.661793 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:40.661805 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:35:40.661835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:35:40.661848 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:35:40.661859 | orchestrator | 2026-04-05 06:35:40.661871 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-05 06:35:40.661905 | orchestrator | Sunday 05 April 2026 06:35:36 +0000 (0:00:02.142) 0:01:13.785 ********** 2026-04-05 06:35:40.661941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:40.661955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:40.661969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 06:35:40.661980 | orchestrator | 2026-04-05 06:35:40.661992 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart 
containers] *** 2026-04-05 06:35:40.662004 | orchestrator | Sunday 05 April 2026 06:35:38 +0000 (0:00:02.444) 0:01:16.230 ********** 2026-04-05 06:35:40.662074 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 06:35:40.662088 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:35:40.662100 | orchestrator | } 2026-04-05 06:35:40.662111 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 06:35:40.662121 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:35:40.662132 | orchestrator | } 2026-04-05 06:35:40.662154 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 06:35:40.662180 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:35:40.662191 | orchestrator | } 2026-04-05 06:35:40.662202 | orchestrator | 2026-04-05 06:35:40.662213 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 06:35:40.662224 | orchestrator | Sunday 05 April 2026 06:35:40 +0000 (0:00:01.570) 0:01:17.800 ********** 2026-04-05 06:35:40.662245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:36:32.051963 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:36:32.052090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:36:32.052112 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:36:32.052125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 06:36:32.052138 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:36:32.052149 | orchestrator | 2026-04-05 06:36:32.052161 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-05 06:36:32.052174 | orchestrator | Sunday 05 April 2026 06:35:42 +0000 (0:00:02.020) 0:01:19.820 ********** 2026-04-05 06:36:32.052211 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:32.052224 | orchestrator | 2026-04-05 06:36:32.052235 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-05 06:36:32.052246 | orchestrator | Sunday 05 April 2026 06:35:45 +0000 (0:00:03.062) 0:01:22.883 ********** 2026-04-05 06:36:32.052257 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:32.052267 | orchestrator | 2026-04-05 06:36:32.052278 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-05 06:36:32.052289 | orchestrator | Sunday 05 April 2026 06:35:49 +0000 (0:00:03.534) 0:01:26.418 ********** 2026-04-05 06:36:32.052300 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:36:32.052310 | orchestrator | 2026-04-05 06:36:32.052335 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 06:36:32.052397 | orchestrator | Sunday 05 April 2026 06:36:04 +0000 (0:00:15.914) 0:01:42.333 ********** 2026-04-05 06:36:32.052408 | orchestrator | 2026-04-05 06:36:32.052419 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-04-05 06:36:32.052430 | orchestrator | Sunday 05 April 2026 06:36:05 +0000 (0:00:00.452) 0:01:42.785 ********** 2026-04-05 06:36:32.052441 | orchestrator | 2026-04-05 06:36:32.052451 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-05 06:36:32.052462 | orchestrator | Sunday 05 April 2026 06:36:05 +0000 (0:00:00.448) 0:01:43.233 ********** 2026-04-05 06:36:32.052472 | orchestrator | 2026-04-05 06:36:32.052486 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-05 06:36:32.052500 | orchestrator | Sunday 05 April 2026 06:36:06 +0000 (0:00:00.805) 0:01:44.038 ********** 2026-04-05 06:36:32.052512 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:36:32.052525 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:36:32.052537 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:36:32.052551 | orchestrator | 2026-04-05 06:36:32.052563 | orchestrator | TASK [placement : Perform Placement online data migration] ********************* 2026-04-05 06:36:32.052576 | orchestrator | Sunday 05 April 2026 06:36:19 +0000 (0:00:12.868) 0:01:56.907 ********** 2026-04-05 06:36:32.052588 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:36:32.052601 | orchestrator | 2026-04-05 06:36:32.052614 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 06:36:32.052628 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-05 06:36:32.052658 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 06:36:32.052673 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 06:36:32.052685 | orchestrator | 2026-04-05 06:36:32.052698 | orchestrator | 2026-04-05 06:36:32.052711 | orchestrator 
| TASKS RECAP ******************************************************************** 2026-04-05 06:36:32.052725 | orchestrator | Sunday 05 April 2026 06:36:31 +0000 (0:00:12.190) 0:02:09.098 ********** 2026-04-05 06:36:32.052738 | orchestrator | =============================================================================== 2026-04-05 06:36:32.052751 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.91s 2026-04-05 06:36:32.052764 | orchestrator | placement : Restart placement-api container ---------------------------- 12.87s 2026-04-05 06:36:32.052777 | orchestrator | placement : Perform Placement online data migration -------------------- 12.19s 2026-04-05 06:36:32.052789 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.57s 2026-04-05 06:36:32.052802 | orchestrator | service-ks-register : placement | Creating users ------------------------ 6.35s 2026-04-05 06:36:32.052815 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 5.19s 2026-04-05 06:36:32.052828 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 5.03s 2026-04-05 06:36:32.052850 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.32s 2026-04-05 06:36:32.052861 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.32s 2026-04-05 06:36:32.052872 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.83s 2026-04-05 06:36:32.052882 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.72s 2026-04-05 06:36:32.052893 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.53s 2026-04-05 06:36:32.052903 | orchestrator | placement : Creating placement databases -------------------------------- 3.06s 2026-04-05 06:36:32.052914 | orchestrator | 
service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.92s 2026-04-05 06:36:32.052924 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.54s 2026-04-05 06:36:32.052936 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.50s 2026-04-05 06:36:32.052946 | orchestrator | placement : Copying over config.json files for services ----------------- 2.50s 2026-04-05 06:36:32.052957 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.44s 2026-04-05 06:36:32.052968 | orchestrator | placement : include_tasks ----------------------------------------------- 2.20s 2026-04-05 06:36:32.052979 | orchestrator | placement : Copying over existing policy file --------------------------- 2.14s 2026-04-05 06:36:32.253686 | orchestrator | + osism apply -a upgrade neutron 2026-04-05 06:36:33.529085 | orchestrator | 2026-04-05 06:36:33 | INFO  | Prepare task for execution of neutron. 2026-04-05 06:36:33.593806 | orchestrator | 2026-04-05 06:36:33 | INFO  | Task 78686ee0-968e-4417-be8e-8c9b457be439 (neutron) was prepared for execution. 2026-04-05 06:36:33.593915 | orchestrator | 2026-04-05 06:36:33 | INFO  | It takes a moment until task 78686ee0-968e-4417-be8e-8c9b457be439 (neutron) has been started and output is visible here. 
2026-04-05 06:36:57.591908 | orchestrator | 2026-04-05 06:36:57.592023 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 06:36:57.592040 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 06:36:57.592054 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 06:36:57.592092 | orchestrator | 2026-04-05 06:36:57.592103 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 06:36:57.592113 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 06:36:57.592124 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 06:36:57.592146 | orchestrator | Sunday 05 April 2026 06:36:38 +0000 (0:00:01.203) 0:00:01.203 ********** 2026-04-05 06:36:57.592158 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:57.592170 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:36:57.592180 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:36:57.592191 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:36:57.592201 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:36:57.592212 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:36:57.592223 | orchestrator | 2026-04-05 06:36:57.592233 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 06:36:57.592244 | orchestrator | Sunday 05 April 2026 06:36:39 +0000 (0:00:01.543) 0:00:02.747 ********** 2026-04-05 06:36:57.592255 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-05 06:36:57.592266 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-05 06:36:57.592277 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-05 06:36:57.592287 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-05 06:36:57.592298 | orchestrator | ok: [testbed-node-4] 
=> (item=enable_neutron_True) 2026-04-05 06:36:57.592371 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-05 06:36:57.592396 | orchestrator | 2026-04-05 06:36:57.592424 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-05 06:36:57.592441 | orchestrator | 2026-04-05 06:36:57.592458 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 06:36:57.592475 | orchestrator | Sunday 05 April 2026 06:36:40 +0000 (0:00:01.235) 0:00:03.982 ********** 2026-04-05 06:36:57.592494 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:36:57.592511 | orchestrator | 2026-04-05 06:36:57.592529 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-05 06:36:57.592547 | orchestrator | Sunday 05 April 2026 06:36:42 +0000 (0:00:01.740) 0:00:05.723 ********** 2026-04-05 06:36:57.592564 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:36:57.592581 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:36:57.592600 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:36:57.592618 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:57.592637 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:36:57.592655 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:36:57.592673 | orchestrator | 2026-04-05 06:36:57.592686 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-05 06:36:57.592696 | orchestrator | Sunday 05 April 2026 06:36:44 +0000 (0:00:02.039) 0:00:07.762 ********** 2026-04-05 06:36:57.592706 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:57.592717 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:36:57.592728 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:36:57.592738 | orchestrator | ok: [testbed-node-3] 
2026-04-05 06:36:57.592749 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:36:57.592759 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:36:57.592769 | orchestrator | 2026-04-05 06:36:57.592780 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-05 06:36:57.592790 | orchestrator | Sunday 05 April 2026 06:36:46 +0000 (0:00:01.469) 0:00:09.231 ********** 2026-04-05 06:36:57.592802 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 06:36:57.592812 | orchestrator |  "changed": false, 2026-04-05 06:36:57.592823 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.592833 | orchestrator | } 2026-04-05 06:36:57.592844 | orchestrator | ok: [testbed-node-1] => { 2026-04-05 06:36:57.592855 | orchestrator |  "changed": false, 2026-04-05 06:36:57.592865 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.592876 | orchestrator | } 2026-04-05 06:36:57.592886 | orchestrator | ok: [testbed-node-2] => { 2026-04-05 06:36:57.592897 | orchestrator |  "changed": false, 2026-04-05 06:36:57.592907 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.592918 | orchestrator | } 2026-04-05 06:36:57.592928 | orchestrator | ok: [testbed-node-3] => { 2026-04-05 06:36:57.592939 | orchestrator |  "changed": false, 2026-04-05 06:36:57.592949 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.592960 | orchestrator | } 2026-04-05 06:36:57.592970 | orchestrator | ok: [testbed-node-4] => { 2026-04-05 06:36:57.592980 | orchestrator |  "changed": false, 2026-04-05 06:36:57.592991 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.593001 | orchestrator | } 2026-04-05 06:36:57.593012 | orchestrator | ok: [testbed-node-5] => { 2026-04-05 06:36:57.593022 | orchestrator |  "changed": false, 2026-04-05 06:36:57.593033 | orchestrator |  "msg": "All assertions passed" 2026-04-05 06:36:57.593043 | orchestrator | } 2026-04-05 06:36:57.593054 | orchestrator | 
2026-04-05 06:36:57.593064 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-05 06:36:57.593075 | orchestrator | Sunday 05 April 2026 06:36:47 +0000 (0:00:00.833) 0:00:10.065 ********** 2026-04-05 06:36:57.593085 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:36:57.593096 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:36:57.593106 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:36:57.593117 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:36:57.593139 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:36:57.593149 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:36:57.593160 | orchestrator | 2026-04-05 06:36:57.593178 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 06:36:57.593205 | orchestrator | Sunday 05 April 2026 06:36:48 +0000 (0:00:01.076) 0:00:11.141 ********** 2026-04-05 06:36:57.593250 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:36:57.593271 | orchestrator | 2026-04-05 06:36:57.593287 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-05 06:36:57.593306 | orchestrator | Sunday 05 April 2026 06:36:49 +0000 (0:00:01.672) 0:00:12.814 ********** 2026-04-05 06:36:57.593378 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:36:57.593399 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:36:57.593416 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:36:57.593433 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:36:57.593444 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:36:57.593454 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:36:57.593465 | orchestrator | 2026-04-05 06:36:57.593476 | orchestrator | TASK [neutron : Check IPv6 support] 
******************************************** 2026-04-05 06:36:57.593487 | orchestrator | Sunday 05 April 2026 06:36:52 +0000 (0:00:02.357) 0:00:15.171 ********** 2026-04-05 06:36:57.593497 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:36:57.593508 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:36:57.593519 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:36:57.593529 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:36:57.593540 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:36:57.593550 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:36:57.593561 | orchestrator | 2026-04-05 06:36:57.593572 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-05 06:36:57.593582 | orchestrator | Sunday 05 April 2026 06:36:53 +0000 (0:00:00.954) 0:00:16.126 ********** 2026-04-05 06:36:57.593593 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:36:57.593604 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:36:57.593614 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:36:57.593625 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:36:57.593635 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:36:57.593646 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:36:57.593656 | orchestrator | 2026-04-05 06:36:57.593667 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-05 06:36:57.593678 | orchestrator | Sunday 05 April 2026 06:36:55 +0000 (0:00:02.538) 0:00:18.664 ********** 2026-04-05 06:36:57.593695 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:36:57.593712 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:36:57.593753 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:03.847809 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:03.847932 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:03.847949 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:03.847984 | orchestrator | 2026-04-05 06:37:03.847996 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-05 06:37:03.848007 | orchestrator | Sunday 05 April 2026 06:36:58 +0000 (0:00:02.406) 0:00:21.071 ********** 2026-04-05 06:37:03.848016 | orchestrator | [WARNING]: Skipped 2026-04-05 06:37:03.848025 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-05 06:37:03.848035 | orchestrator | due to this access issue: 2026-04-05 06:37:03.848045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-05 06:37:03.848053 | orchestrator | a directory 2026-04-05 06:37:03.848062 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 
06:37:03.848071 | orchestrator | 2026-04-05 06:37:03.848080 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-05 06:37:03.848088 | orchestrator | Sunday 05 April 2026 06:36:59 +0000 (0:00:01.176) 0:00:22.248 ********** 2026-04-05 06:37:03.848097 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:37:03.848107 | orchestrator | 2026-04-05 06:37:03.848116 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-05 06:37:03.848124 | orchestrator | Sunday 05 April 2026 06:37:01 +0000 (0:00:01.898) 0:00:24.146 ********** 2026-04-05 06:37:03.848164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:03.848178 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:03.848188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:03.848206 | orchestrator | ok: [testbed-node-4] 
=> (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:03.848216 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:03.848243 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:07.767829 | orchestrator | 2026-04-05 06:37:07.767931 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-05 06:37:07.767946 | orchestrator | Sunday 05 April 2026 06:37:03 +0000 (0:00:02.808) 0:00:26.955 ********** 2026-04-05 06:37:07.767963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:07.768002 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:07.768014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:07.768053 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:07.768065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-05 06:37:07.768075 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:07.768115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:07.768128 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:07.768138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:07.768155 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:07.768165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:07.768175 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:07.768185 | orchestrator | 2026-04-05 06:37:07.768195 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-05 06:37:07.768205 | orchestrator | Sunday 05 April 2026 06:37:06 +0000 (0:00:02.367) 0:00:29.323 ********** 2026-04-05 06:37:07.768215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:07.768225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:07.768240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:07.768251 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:07.768269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:14.421809 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:14.421929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:14.421950 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:14.421963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:14.421974 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:14.421985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:14.421997 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:14.422009 | orchestrator | 2026-04-05 06:37:14.422079 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-05 06:37:14.422108 | orchestrator | Sunday 05 April 2026 06:37:09 +0000 (0:00:02.947) 0:00:32.270 ********** 2026-04-05 06:37:14.422120 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:14.422131 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:14.422142 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:14.422153 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:14.422163 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:14.422184 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:14.422194 | orchestrator | 2026-04-05 06:37:14.422206 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 
2026-04-05 06:37:14.422217 | orchestrator | Sunday 05 April 2026 06:37:11 +0000 (0:00:02.206) 0:00:34.476 ********** 2026-04-05 06:37:14.422253 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:14.422264 | orchestrator | 2026-04-05 06:37:14.422275 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-05 06:37:14.422285 | orchestrator | Sunday 05 April 2026 06:37:11 +0000 (0:00:00.132) 0:00:34.609 ********** 2026-04-05 06:37:14.422296 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:14.422357 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:14.422371 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:14.422383 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:14.422396 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:14.422408 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:14.422421 | orchestrator | 2026-04-05 06:37:14.422434 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-05 06:37:14.422446 | orchestrator | Sunday 05 April 2026 06:37:12 +0000 (0:00:00.813) 0:00:35.423 ********** 2026-04-05 06:37:14.422481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:14.422497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:14.422511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:14.422525 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:14.422544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:14.422567 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:14.422580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:14.422594 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:14.422617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:22.811488 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:22.811638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:22.811658 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:22.811670 | orchestrator | 2026-04-05 06:37:22.811682 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-05 06:37:22.811695 | orchestrator | Sunday 05 April 2026 06:37:14 +0000 (0:00:02.326) 0:00:37.749 ********** 2026-04-05 06:37:22.811709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:22.811774 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:22.811789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:22.811822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:22.811835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:22.811848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:22.811868 | orchestrator | 2026-04-05 06:37:22.811885 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-05 06:37:22.811896 | orchestrator | Sunday 05 April 2026 06:37:17 +0000 (0:00:03.109) 0:00:40.858 ********** 2026-04-05 06:37:22.811908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:22.811929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:25.513491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:25.513608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:25.513655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:25.513785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:37:25.513808 | orchestrator | 2026-04-05 06:37:25.513822 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-05 06:37:25.513834 | orchestrator | Sunday 05 April 2026 06:37:23 +0000 (0:00:05.885) 0:00:46.744 ********** 2026-04-05 06:37:25.513868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:25.513882 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:25.513895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:25.513915 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:25.513934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:25.513949 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:25.513963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:25.513977 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:25.513998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:39.667599 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.667740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:39.667786 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:39.667814 | orchestrator | 2026-04-05 06:37:39.668674 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-05 06:37:39.668706 | orchestrator | Sunday 05 April 2026 06:37:25 +0000 (0:00:02.088) 0:00:48.833 ********** 2026-04-05 06:37:39.668717 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.668728 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:39.668739 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 06:37:39.668751 | orchestrator | ok: [testbed-node-1] 2026-04-05 06:37:39.668763 | orchestrator | ok: [testbed-node-2] 2026-04-05 06:37:39.668773 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:37:39.668784 | orchestrator | 2026-04-05 06:37:39.668795 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-05 06:37:39.668807 | orchestrator | Sunday 05 April 2026 06:37:28 +0000 (0:00:02.813) 0:00:51.646 ********** 2026-04-05 06:37:39.668836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:39.668850 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:39.668862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:39.668874 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.668886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:39.668897 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:39.668934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:39.668964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:39.668983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:37:39.668996 | orchestrator | 2026-04-05 06:37:39.669007 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-05 06:37:39.669018 | orchestrator | Sunday 05 April 2026 06:37:32 +0000 (0:00:04.045) 0:00:55.692 ********** 2026-04-05 06:37:39.669029 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:39.669040 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:39.669051 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:39.669062 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:39.669072 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.669083 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:39.669093 | orchestrator | 2026-04-05 06:37:39.669104 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-05 06:37:39.669115 | orchestrator | Sunday 05 April 2026 06:37:34 +0000 (0:00:02.268) 0:00:57.961 ********** 2026-04-05 06:37:39.669126 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:39.669137 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:39.669147 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:39.669158 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:39.669168 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.669190 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:39.669201 | orchestrator | 2026-04-05 06:37:39.669212 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-05 06:37:39.669222 | orchestrator | Sunday 05 April 2026 06:37:37 
+0000 (0:00:02.255) 0:01:00.216 ********** 2026-04-05 06:37:39.669233 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:39.669244 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:39.669254 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:39.669265 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:39.669276 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:39.669325 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:39.669347 | orchestrator | 2026-04-05 06:37:39.669365 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-05 06:37:39.669391 | orchestrator | Sunday 05 April 2026 06:37:39 +0000 (0:00:02.470) 0:01:02.687 ********** 2026-04-05 06:37:50.695520 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:50.695652 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:50.695669 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:50.695681 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:50.695692 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:50.695703 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:50.695775 | orchestrator | 2026-04-05 06:37:50.695792 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-05 06:37:50.695804 | orchestrator | Sunday 05 April 2026 06:37:42 +0000 (0:00:02.460) 0:01:05.148 ********** 2026-04-05 06:37:50.695815 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:50.695826 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:50.695837 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:50.695847 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:50.695858 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:50.695869 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:50.695880 | orchestrator | 2026-04-05 06:37:50.695891 | orchestrator | TASK [neutron : 
Copying over dnsmasq.conf] ************************************* 2026-04-05 06:37:50.695902 | orchestrator | Sunday 05 April 2026 06:37:44 +0000 (0:00:02.483) 0:01:07.631 ********** 2026-04-05 06:37:50.695913 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.695924 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:50.695935 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.695946 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:50.695957 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.695967 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:50.695978 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.695989 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:50.696000 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.696011 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:50.696021 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-05 06:37:50.696032 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:50.696043 | orchestrator | 2026-04-05 06:37:50.696055 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-05 06:37:50.696068 | orchestrator | Sunday 05 April 2026 06:37:47 +0000 (0:00:02.478) 0:01:10.109 ********** 2026-04-05 06:37:50.696103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:50.696144 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:37:50.696159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:50.696173 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:37:50.696207 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:37:50.696222 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:37:50.696235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:50.696250 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:37:50.696270 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:50.696317 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:37:50.696331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:37:50.696344 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:37:50.696357 | orchestrator | 2026-04-05 06:37:50.696371 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-05 06:37:50.696385 | orchestrator | Sunday 05 April 2026 06:37:49 +0000 
(0:00:02.134) 0:01:12.244 ********** 2026-04-05 06:37:50.696408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:14.841172 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.841323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:14.841347 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.841377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:14.841412 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.841425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:14.841438 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.841449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:14.841460 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.841492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-04-05 06:38:14.841504 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.841515 | orchestrator | 2026-04-05 06:38:14.841527 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-05 06:38:14.841540 | orchestrator | Sunday 05 April 2026 06:37:51 +0000 (0:00:02.599) 0:01:14.844 ********** 2026-04-05 06:38:14.841551 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.841561 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.841572 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.841590 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.841601 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.841612 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.841622 | orchestrator | 2026-04-05 06:38:14.841633 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-05 06:38:14.841644 | orchestrator | Sunday 05 April 2026 06:37:53 +0000 (0:00:01.846) 0:01:16.690 ********** 2026-04-05 06:38:14.841655 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.841665 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.841676 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.841687 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:38:14.841700 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:38:14.841712 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:38:14.841725 | orchestrator | 2026-04-05 06:38:14.841745 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-05 06:38:14.841758 | orchestrator | Sunday 05 April 2026 06:37:58 +0000 (0:00:04.591) 0:01:21.282 ********** 2026-04-05 06:38:14.841771 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.841784 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.841796 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 06:38:14.841806 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.841817 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.841827 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.841838 | orchestrator | 2026-04-05 06:38:14.841849 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-05 06:38:14.841859 | orchestrator | Sunday 05 April 2026 06:38:00 +0000 (0:00:02.388) 0:01:23.671 ********** 2026-04-05 06:38:14.841870 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.841881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.841891 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.841902 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.841912 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.841923 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.841933 | orchestrator | 2026-04-05 06:38:14.841944 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-05 06:38:14.841955 | orchestrator | Sunday 05 April 2026 06:38:02 +0000 (0:00:02.269) 0:01:25.940 ********** 2026-04-05 06:38:14.841979 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.841990 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.842000 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.842011 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.842084 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.842096 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.842106 | orchestrator | 2026-04-05 06:38:14.842117 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-05 06:38:14.842128 | orchestrator | Sunday 05 April 2026 06:38:05 +0000 (0:00:02.436) 0:01:28.377 ********** 2026-04-05 06:38:14.842139 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 06:38:14.842149 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.842160 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.842170 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.842181 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.842191 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.842202 | orchestrator | 2026-04-05 06:38:14.842212 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-05 06:38:14.842223 | orchestrator | Sunday 05 April 2026 06:38:07 +0000 (0:00:02.250) 0:01:30.628 ********** 2026-04-05 06:38:14.842234 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.842244 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.842254 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.842285 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.842296 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.842307 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.842327 | orchestrator | 2026-04-05 06:38:14.842338 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-05 06:38:14.842348 | orchestrator | Sunday 05 April 2026 06:38:09 +0000 (0:00:02.242) 0:01:32.870 ********** 2026-04-05 06:38:14.842359 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.842370 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.842381 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.842391 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.842402 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.842412 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.842423 | orchestrator | 2026-04-05 06:38:14.842434 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-05 06:38:14.842445 | 
orchestrator | Sunday 05 April 2026 06:38:12 +0000 (0:00:02.279) 0:01:35.149 ********** 2026-04-05 06:38:14.842455 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:14.842466 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:14.842477 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:14.842487 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:14.842498 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:14.842508 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:14.842519 | orchestrator | 2026-04-05 06:38:14.842539 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-05 06:38:21.923778 | orchestrator | Sunday 05 April 2026 06:38:14 +0000 (0:00:02.710) 0:01:37.860 ********** 2026-04-05 06:38:21.923887 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 06:38:21.923904 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:21.923917 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 06:38:21.923928 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:21.923940 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 06:38:21.923951 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:21.923962 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 06:38:21.923972 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:21.923983 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-05 06:38:21.923994 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:21.924005 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2026-04-05 06:38:21.924016 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:21.924027 | orchestrator | 2026-04-05 06:38:21.924039 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-05 06:38:21.924050 | orchestrator | Sunday 05 April 2026 06:38:16 +0000 (0:00:02.013) 0:01:39.874 ********** 2026-04-05 06:38:21.924084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:21.924121 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:21.924134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:21.924147 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:21.924177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:21.924190 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:38:21.924202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:21.924214 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:21.924230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:21.924242 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:38:21.924284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:21.924308 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:38:21.924322 | orchestrator | 2026-04-05 06:38:21.924335 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-05 06:38:21.924348 | orchestrator | Sunday 05 April 2026 06:38:19 +0000 (0:00:02.632) 0:01:42.507 ********** 2026-04-05 06:38:21.924361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:38:21.924398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:38:24.878962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-05 06:38:24.879114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:38:24.879176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:38:24.879192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:38:24.879205 | orchestrator | 2026-04-05 06:38:24.879218 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-04-05 06:38:24.879231 | orchestrator | Sunday 05 April 2026 06:38:22 +0000 (0:00:02.910) 0:01:45.417 ********** 2026-04-05 06:38:24.879244 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 06:38:24.879307 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879340 | orchestrator | } 2026-04-05 06:38:24.879352 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 06:38:24.879363 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879374 | orchestrator | } 2026-04-05 06:38:24.879385 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 06:38:24.879395 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879406 | orchestrator | } 2026-04-05 06:38:24.879417 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 06:38:24.879427 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879438 | orchestrator | } 2026-04-05 06:38:24.879449 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 06:38:24.879459 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879472 | orchestrator | } 2026-04-05 06:38:24.879484 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 06:38:24.879497 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:38:24.879509 | orchestrator | } 2026-04-05 06:38:24.879522 | orchestrator | 2026-04-05 06:38:24.879535 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 
06:38:24.879548 | orchestrator | Sunday 05 April 2026 06:38:23 +0000 (0:00:00.899) 0:01:46.317 ********** 2026-04-05 06:38:24.879577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:24.879593 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:38:24.879606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:38:24.879620 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:38:24.879633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:38:24.879646 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:38:24.879670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:41:22.354636 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:41:22.354787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-05 06:41:22.354809 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:41:22.354824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-05 06:41:22.354839 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:41:22.354853 | orchestrator |
2026-04-05 06:41:22.354867 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 06:41:22.354883 | orchestrator | Sunday 05 April 2026 06:38:26 +0000 (0:00:03.321) 0:01:49.638 **********
2026-04-05 06:41:22.354896 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:41:22.354909 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:41:22.354922 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:41:22.354936 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:41:22.354949 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:41:22.354962 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:41:22.354976 | orchestrator |
2026-04-05 06:41:22.354989 | orchestrator | TASK [neutron : Running Neutron database expand container] *********************
2026-04-05 06:41:22.355001 | orchestrator | Sunday 05 April 2026 06:38:27 +0000 (0:00:00.634) 0:01:50.273 **********
2026-04-05 06:41:22.355009 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:41:22.355017 | orchestrator |
2026-04-05 06:41:22.355025 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355033 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:35.123) 0:02:25.396 **********
2026-04-05 06:41:22.355041 | orchestrator |
2026-04-05 06:41:22.355049 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355056 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:00.254) 0:02:25.651 **********
2026-04-05 06:41:22.355064 | orchestrator |
2026-04-05 06:41:22.355072 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355080 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:00.075) 0:02:25.726 **********
2026-04-05 06:41:22.355087 | orchestrator |
2026-04-05 06:41:22.355095 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355103 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:00.089) 0:02:25.815 **********
2026-04-05 06:41:22.355111 | orchestrator |
2026-04-05 06:41:22.355119 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355183 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:00.073) 0:02:25.889 **********
2026-04-05 06:41:22.355194 | orchestrator |
2026-04-05 06:41:22.355203 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355212 | orchestrator | Sunday 05 April 2026 06:39:02 +0000 (0:00:00.075) 0:02:25.965 **********
2026-04-05 06:41:22.355222 | orchestrator |
2026-04-05 06:41:22.355231 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-05 06:41:22.355241 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-05 06:41:22.355251 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-05 06:41:22.355272 | orchestrator | Sunday 05 April 2026 06:39:03 +0000 (0:00:00.073) 0:02:26.038 **********
2026-04-05 06:41:22.355285 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:41:22.355299 | orchestrator | changed: [testbed-node-1]
2026-04-05 06:41:22.355313 | orchestrator | changed: [testbed-node-2]
2026-04-05 06:41:22.355326 | orchestrator |
2026-04-05 06:41:22.355338 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-05 06:41:22.355349 | orchestrator | Sunday 05 April 2026 06:39:51 +0000 (0:00:48.075) 0:03:14.114 **********
2026-04-05 06:41:22.355358 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:41:22.355367 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:41:22.355377 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:41:22.355386 | orchestrator |
2026-04-05 06:41:22.355412 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] *********************
2026-04-05 06:41:22.355421 | orchestrator | Sunday 05 April 2026 06:40:58 +0000 (0:01:07.200) 0:04:21.314 **********
2026-04-05 06:41:22.355431 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:41:22.355440 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:41:22.355449 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:41:22.355458 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:41:22.355468 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:41:22.355477 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:41:22.355487 | orchestrator |
2026-04-05 06:41:22.355496 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] *******************
2026-04-05 06:41:22.355505 | orchestrator | Sunday 05 April 2026 06:40:58 +0000 (0:00:00.663) 0:04:21.978 **********
2026-04-05 06:41:22.355522 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:41:22.355531 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:41:22.355541 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:41:22.355549 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:41:22.355557 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:41:22.355564 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:41:22.355572 | orchestrator |
2026-04-05 06:41:22.355580 | orchestrator | TASK [neutron : Running Neutron database contract container] *******************
2026-04-05 06:41:22.355588 | orchestrator | Sunday 05 April 2026 06:41:04 +0000 (0:00:05.093) 0:04:27.072 **********
2026-04-05 06:41:22.355595 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:41:22.355603 | orchestrator |
2026-04-05 06:41:22.355611 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355618 | orchestrator | Sunday 05 April 2026 06:41:18 +0000 (0:00:14.642) 0:04:41.715 **********
2026-04-05 06:41:22.355626 | orchestrator |
2026-04-05 06:41:22.355634 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355641 | orchestrator | Sunday 05 April 2026 06:41:18 +0000 (0:00:00.085) 0:04:41.800 **********
2026-04-05 06:41:22.355649 | orchestrator |
2026-04-05 06:41:22.355657 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355665 | orchestrator | Sunday 05 April 2026 06:41:18 +0000 (0:00:00.080) 0:04:41.880 **********
2026-04-05 06:41:22.355672 | orchestrator |
2026-04-05 06:41:22.355680 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355695 | orchestrator | Sunday 05 April 2026 06:41:18 +0000 (0:00:00.074) 0:04:41.955 **********
2026-04-05 06:41:22.355703 | orchestrator |
2026-04-05 06:41:22.355711 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355719 | orchestrator | Sunday 05 April 2026 06:41:19 +0000 (0:00:00.072) 0:04:42.028 **********
2026-04-05 06:41:22.355726 | orchestrator |
2026-04-05 06:41:22.355734 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-05 06:41:22.355742 | orchestrator | Sunday 05 April 2026 06:41:19 +0000 (0:00:00.074) 0:04:42.103 **********
2026-04-05 06:41:22.355750 | orchestrator |
2026-04-05 06:41:22.355758 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-05 06:41:22.355766 | orchestrator | Sunday 05 April 2026 06:41:19 +0000 (0:00:00.075) 0:04:42.179 **********
2026-04-05 06:41:22.355774 | orchestrator |
skipping: [testbed-node-0]
2026-04-05 06:41:22.355781 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:41:22.355789 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:41:22.355797 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:41:22.355805 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:41:22.355812 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:41:22.355820 | orchestrator |
2026-04-05 06:41:22.355828 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 06:41:22.355837 | orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-05 06:41:22.355847 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-05 06:41:22.355855 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-05 06:41:22.355862 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-05 06:41:22.355870 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-05 06:41:22.355878 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-05 06:41:22.355886 | orchestrator |
2026-04-05 06:41:22.355895 | orchestrator |
2026-04-05 06:41:22.355909 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 06:41:22.355922 | orchestrator | Sunday 05 April 2026 06:41:22 +0000 (0:00:03.181) 0:04:45.360 **********
2026-04-05 06:41:22.355935 | orchestrator | ===============================================================================
2026-04-05 06:41:22.355948 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 67.20s
2026-04-05 06:41:22.355962 | orchestrator | neutron : Restart neutron-server container ----------------------------- 48.08s
2026-04-05 06:41:22.355975 | orchestrator | neutron : Running Neutron database expand container -------------------- 35.12s
2026-04-05 06:41:22.355989 | orchestrator | neutron : Running Neutron database contract container ------------------ 14.64s
2026-04-05 06:41:22.356003 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.89s
2026-04-05 06:41:22.356016 | orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 5.09s
2026-04-05 06:41:22.356033 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.59s
2026-04-05 06:41:22.777468 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.05s
2026-04-05 06:41:22.777573 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.32s
2026-04-05 06:41:22.777589 | orchestrator | neutron : include_tasks ------------------------------------------------- 3.18s
2026-04-05 06:41:22.777601 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.11s
2026-04-05 06:41:22.777644 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.95s
2026-04-05 06:41:22.777671 | orchestrator | service-check-containers : neutron | Check containers ------------------- 2.91s
2026-04-05 06:41:22.777683 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.81s
2026-04-05 06:41:22.777694 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 2.81s
2026-04-05 06:41:22.777704 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.71s
2026-04-05 06:41:22.777715 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.63s
2026-04-05 06:41:22.777725 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 2.60s
2026-04-05 06:41:22.777736 | orchestrator | Setting sysctl values --------------------------------------------------- 2.54s
2026-04-05 06:41:22.777747 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 2.48s
2026-04-05 06:41:22.984473 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-05 06:41:22.984581 | orchestrator | + osism apply -a reconfigure nova
2026-04-05 06:41:24.285737 | orchestrator | 2026-04-05 06:41:24 | INFO  | Prepare task for execution of nova.
2026-04-05 06:41:24.352032 | orchestrator | 2026-04-05 06:41:24 | INFO  | Task 3d2290eb-8e46-48fa-bb36-f1d0d0eb25b5 (nova) was prepared for execution.
2026-04-05 06:41:24.352196 | orchestrator | 2026-04-05 06:41:24 | INFO  | It takes a moment until task 3d2290eb-8e46-48fa-bb36-f1d0d0eb25b5 (nova) has been started and output is visible here.
2026-04-05 06:43:45.316492 | orchestrator |
2026-04-05 06:43:45.316610 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 06:43:45.316628 | orchestrator |
2026-04-05 06:43:45.316640 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-05 06:43:45.316651 | orchestrator | Sunday 05 April 2026 06:41:30 +0000 (0:00:02.868) 0:00:02.868 **********
2026-04-05 06:43:45.316662 | orchestrator | changed: [testbed-manager]
2026-04-05 06:43:45.316673 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:43:45.316684 | orchestrator | changed: [testbed-node-1]
2026-04-05 06:43:45.316695 | orchestrator | changed: [testbed-node-2]
2026-04-05 06:43:45.316705 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:43:45.316716 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:43:45.316727 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:43:45.316737 | orchestrator |
2026-04-05 06:43:45.316748 | orchestrator | TASK [Group hosts based on Kolla action]
*************************************** 2026-04-05 06:43:45.316759 | orchestrator | Sunday 05 April 2026 06:41:33 +0000 (0:00:02.527) 0:00:05.396 ********** 2026-04-05 06:43:45.316770 | orchestrator | changed: [testbed-manager] 2026-04-05 06:43:45.316780 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:43:45.316791 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:43:45.316802 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:43:45.316812 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:43:45.316823 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:43:45.316834 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:43:45.316844 | orchestrator | 2026-04-05 06:43:45.316855 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 06:43:45.316866 | orchestrator | Sunday 05 April 2026 06:41:35 +0000 (0:00:02.071) 0:00:07.468 ********** 2026-04-05 06:43:45.316877 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-05 06:43:45.316888 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-05 06:43:45.316899 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-05 06:43:45.316910 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-05 06:43:45.316920 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-05 06:43:45.316931 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-05 06:43:45.316942 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-05 06:43:45.316979 | orchestrator | 2026-04-05 06:43:45.316991 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-04-05 06:43:45.317002 | orchestrator | 2026-04-05 06:43:45.317013 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-05 06:43:45.317024 | orchestrator | 
Sunday 05 April 2026 06:41:38 +0000 (0:00:03.366) 0:00:10.834 ********** 2026-04-05 06:43:45.317037 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:43:45.317050 | orchestrator | 2026-04-05 06:43:45.317147 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-05 06:43:45.317164 | orchestrator | Sunday 05 April 2026 06:41:41 +0000 (0:00:03.127) 0:00:13.962 ********** 2026-04-05 06:43:45.317177 | orchestrator | ok: [testbed-node-0] => (item=nova_cell0) 2026-04-05 06:43:45.317188 | orchestrator | ok: [testbed-node-0] => (item=nova_api) 2026-04-05 06:43:45.317199 | orchestrator | 2026-04-05 06:43:45.317210 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-05 06:43:45.317220 | orchestrator | Sunday 05 April 2026 06:41:47 +0000 (0:00:05.365) 0:00:19.328 ********** 2026-04-05 06:43:45.317231 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 06:43:45.317242 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 06:43:45.317253 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317263 | orchestrator | 2026-04-05 06:43:45.317274 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-05 06:43:45.317285 | orchestrator | Sunday 05 April 2026 06:41:52 +0000 (0:00:05.290) 0:00:24.618 ********** 2026-04-05 06:43:45.317295 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317306 | orchestrator | 2026-04-05 06:43:45.317317 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-05 06:43:45.317328 | orchestrator | Sunday 05 April 2026 06:41:53 +0000 (0:00:01.600) 0:00:26.219 ********** 2026-04-05 06:43:45.317338 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317349 | orchestrator | 2026-04-05 06:43:45.317360 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] 
******************** 2026-04-05 06:43:45.317370 | orchestrator | Sunday 05 April 2026 06:41:56 +0000 (0:00:02.172) 0:00:28.391 ********** 2026-04-05 06:43:45.317381 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:43:45.317392 | orchestrator | 2026-04-05 06:43:45.317417 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 06:43:45.317428 | orchestrator | Sunday 05 April 2026 06:42:00 +0000 (0:00:03.917) 0:00:32.309 ********** 2026-04-05 06:43:45.317439 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.317450 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.317461 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.317472 | orchestrator | 2026-04-05 06:43:45.317482 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-05 06:43:45.317493 | orchestrator | Sunday 05 April 2026 06:42:01 +0000 (0:00:01.765) 0:00:34.075 ********** 2026-04-05 06:43:45.317504 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317515 | orchestrator | 2026-04-05 06:43:45.317526 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-05 06:43:45.317537 | orchestrator | Sunday 05 April 2026 06:42:35 +0000 (0:00:33.482) 0:01:07.557 ********** 2026-04-05 06:43:45.317547 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317558 | orchestrator | 2026-04-05 06:43:45.317569 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-05 06:43:45.317579 | orchestrator | Sunday 05 April 2026 06:42:50 +0000 (0:00:15.660) 0:01:23.218 ********** 2026-04-05 06:43:45.317590 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317601 | orchestrator | 2026-04-05 06:43:45.317612 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 06:43:45.317622 | orchestrator | Sunday 05 April 2026 
06:43:06 +0000 (0:00:15.393) 0:01:38.612 ********** 2026-04-05 06:43:45.317633 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317644 | orchestrator | 2026-04-05 06:43:45.317672 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-04-05 06:43:45.317692 | orchestrator | Sunday 05 April 2026 06:43:08 +0000 (0:00:02.013) 0:01:40.625 ********** 2026-04-05 06:43:45.317703 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.317713 | orchestrator | 2026-04-05 06:43:45.317724 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 06:43:45.317735 | orchestrator | Sunday 05 April 2026 06:43:10 +0000 (0:00:01.607) 0:01:42.233 ********** 2026-04-05 06:43:45.317746 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.317757 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.317767 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.317778 | orchestrator | 2026-04-05 06:43:45.317789 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-05 06:43:45.317799 | orchestrator | Sunday 05 April 2026 06:43:11 +0000 (0:00:01.449) 0:01:43.682 ********** 2026-04-05 06:43:45.317810 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.317821 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.317832 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.317842 | orchestrator | 2026-04-05 06:43:45.317853 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-05 06:43:45.317863 | orchestrator | 2026-04-05 06:43:45.317874 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-05 06:43:45.317885 | orchestrator | Sunday 05 April 2026 06:43:13 +0000 (0:00:01.619) 0:01:45.302 ********** 2026-04-05 06:43:45.317895 | orchestrator | included: nova-cell for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:43:45.317906 | orchestrator | 2026-04-05 06:43:45.317917 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-05 06:43:45.317927 | orchestrator | Sunday 05 April 2026 06:43:14 +0000 (0:00:01.740) 0:01:47.043 ********** 2026-04-05 06:43:45.317938 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.317949 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.317959 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.317970 | orchestrator | 2026-04-05 06:43:45.317981 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-05 06:43:45.317991 | orchestrator | Sunday 05 April 2026 06:43:17 +0000 (0:00:02.976) 0:01:50.020 ********** 2026-04-05 06:43:45.318002 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.318013 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.318123 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:43:45.318144 | orchestrator | 2026-04-05 06:43:45.318162 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-05 06:43:45.318179 | orchestrator | Sunday 05 April 2026 06:43:21 +0000 (0:00:03.518) 0:01:53.538 ********** 2026-04-05 06:43:45.318190 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-05 06:43:45.318201 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.318212 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-05 06:43:45.318222 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.318233 | orchestrator | ok: [testbed-node-0] => (item=openstack) 2026-04-05 06:43:45.318243 | orchestrator | 2026-04-05 06:43:45.318254 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-05 06:43:45.318265 | orchestrator | Sunday 05 April 2026 06:43:26 +0000 (0:00:04.751) 
0:01:58.289 ********** 2026-04-05 06:43:45.318275 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-05 06:43:45.318286 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.318296 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-05 06:43:45.318307 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.318317 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-05 06:43:45.318328 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-05 06:43:45.318339 | orchestrator | 2026-04-05 06:43:45.318349 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-05 06:43:45.318360 | orchestrator | Sunday 05 April 2026 06:43:37 +0000 (0:00:11.868) 0:02:10.158 ********** 2026-04-05 06:43:45.318380 | orchestrator | skipping: [testbed-node-0] => (item=openstack)  2026-04-05 06:43:45.318391 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.318402 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-05 06:43:45.318412 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.318423 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-05 06:43:45.318433 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:43:45.318444 | orchestrator | 2026-04-05 06:43:45.318455 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-05 06:43:45.318472 | orchestrator | Sunday 05 April 2026 06:43:39 +0000 (0:00:01.509) 0:02:11.668 ********** 2026-04-05 06:43:45.318483 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-05 06:43:45.318493 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:43:45.318504 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-05 06:43:45.318514 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:43:45.318525 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-05 
06:43:45.318536 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:43:45.318546 | orchestrator |
2026-04-05 06:43:45.318557 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 06:43:45.318568 | orchestrator | Sunday 05 April 2026 06:43:41 +0000 (0:00:01.997) 0:02:13.665 **********
2026-04-05 06:43:45.318578 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:43:45.318589 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:43:45.318600 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:43:45.318610 | orchestrator |
2026-04-05 06:43:45.318621 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-05 06:43:45.318632 | orchestrator | Sunday 05 April 2026 06:43:43 +0000 (0:00:01.632) 0:02:15.298 **********
2026-04-05 06:43:45.318642 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:43:45.318653 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:43:45.318663 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:43:45.318674 | orchestrator |
2026-04-05 06:43:45.318685 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-05 06:43:45.318695 | orchestrator | Sunday 05 April 2026 06:43:45 +0000 (0:00:02.019) 0:02:17.318 **********
2026-04-05 06:43:45.318715 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.723620 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.723764 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:45:12.723782 | orchestrator |
2026-04-05 06:45:12.723795 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-05 06:45:12.723807 | orchestrator | Sunday 05 April 2026 06:43:48 +0000 (0:00:03.605) 0:02:20.923 **********
2026-04-05 06:45:12.723818 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.723829 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.723840 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:45:12.723851 | orchestrator |
2026-04-05 06:45:12.723863 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-05 06:45:12.723873 | orchestrator | Sunday 05 April 2026 06:44:01 +0000 (0:00:12.573) 0:02:33.497 **********
2026-04-05 06:45:12.723884 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.723895 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.723906 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:45:12.723916 | orchestrator |
2026-04-05 06:45:12.723927 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-05 06:45:12.723938 | orchestrator | Sunday 05 April 2026 06:44:14 +0000 (0:00:13.303) 0:02:46.801 **********
2026-04-05 06:45:12.723948 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:45:12.723959 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.723969 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.723980 | orchestrator |
2026-04-05 06:45:12.723991 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-05 06:45:12.724002 | orchestrator | Sunday 05 April 2026 06:44:16 +0000 (0:00:02.409) 0:02:49.210 **********
2026-04-05 06:45:12.724039 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:45:12.724051 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.724061 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.724072 | orchestrator |
2026-04-05 06:45:12.724083 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-05 06:45:12.724093 | orchestrator | Sunday 05 April 2026 06:44:18 +0000 (0:00:01.943) 0:02:51.154 **********
2026-04-05 06:45:12.724129 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.724141 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.724154 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:45:12.724166 | orchestrator |
2026-04-05 06:45:12.724178 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 06:45:12.724191 | orchestrator | Sunday 05 April 2026 06:44:32 +0000 (0:00:13.834) 0:03:04.989 **********
2026-04-05 06:45:12.724203 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:45:12.724215 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:12.724227 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:12.724239 | orchestrator |
2026-04-05 06:45:12.724251 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-05 06:45:12.724263 | orchestrator |
2026-04-05 06:45:12.724275 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 06:45:12.724287 | orchestrator | Sunday 05 April 2026 06:44:34 +0000 (0:00:01.808) 0:03:06.798 **********
2026-04-05 06:45:12.724300 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:45:12.724313 | orchestrator |
2026-04-05 06:45:12.724326 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-04-05 06:45:12.724339 | orchestrator | Sunday 05 April 2026 06:44:36 +0000 (0:00:02.014) 0:03:08.812 **********
2026-04-05 06:45:12.724349 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-05 06:45:12.724360 | orchestrator | ok: [testbed-node-0] => (item=nova (compute))
2026-04-05 06:45:12.724370 | orchestrator |
2026-04-05 06:45:12.724380 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-04-05 06:45:12.724391 | orchestrator | Sunday 05 April 2026 06:44:41 +0000 (0:00:04.432) 0:03:13.245 **********
2026-04-05 06:45:12.724401 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy ->
https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-05 06:45:12.724414 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-05 06:45:12.724424 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-05 06:45:12.724435 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-05 06:45:12.724446 | orchestrator |
2026-04-05 06:45:12.724470 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-05 06:45:12.724482 | orchestrator | Sunday 05 April 2026 06:44:48 +0000 (0:00:07.468) 0:03:20.713 **********
2026-04-05 06:45:12.724492 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-05 06:45:12.724503 | orchestrator |
2026-04-05 06:45:12.724513 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-05 06:45:12.724524 | orchestrator | Sunday 05 April 2026 06:44:52 +0000 (0:00:04.268) 0:03:24.982 **********
2026-04-05 06:45:12.724534 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-05 06:45:12.724568 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-05 06:45:12.724591 | orchestrator |
2026-04-05 06:45:12.724603 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-05 06:45:12.724613 | orchestrator | Sunday 05 April 2026 06:44:58 +0000 (0:00:05.861) 0:03:30.843 **********
2026-04-05 06:45:12.724624 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-05 06:45:12.724635 | orchestrator |
2026-04-05 06:45:12.724645 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-04-05 06:45:12.724665 | orchestrator | Sunday 05 April 2026 06:45:02 +0000 (0:00:04.174) 0:03:35.017 **********
2026-04-05 06:45:12.724676 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-05 06:45:12.724687 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service)
2026-04-05 06:45:12.724697 | orchestrator |
2026-04-05 06:45:12.724724 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-05 06:45:12.724736 | orchestrator | Sunday 05 April 2026 06:45:11 +0000 (0:00:08.348) 0:03:43.366 **********
2026-04-05 06:45:12.724753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 06:45:12.724770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:12.724789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:12.724811 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:24.003448 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:24.003578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:24.003597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:24.003626 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 06:45:24.003660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 06:45:24.003671 | orchestrator |
2026-04-05 06:45:24.003683 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-05 06:45:24.003694 | orchestrator | Sunday 05 April 2026 06:45:14 +0000 (0:00:03.368) 0:03:46.734 **********
2026-04-05 06:45:24.003720 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:45:24.003732 | orchestrator |
2026-04-05 06:45:24.003742 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-05 06:45:24.003752 | orchestrator | Sunday 05 April 2026 06:45:15 +0000 (0:00:01.188) 0:03:47.922 **********
2026-04-05 06:45:24.003762 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:45:24.003771 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:24.003781 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:24.003791 | orchestrator |
2026-04-05 06:45:24.003800 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-05 06:45:24.003810 | orchestrator | Sunday 05 April 2026 06:45:17 +0000 (0:00:01.380) 0:03:49.303 **********
2026-04-05 06:45:24.003820 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 06:45:24.003830 | orchestrator |
2026-04-05 06:45:24.003839 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-05 06:45:24.003849 | orchestrator | Sunday 05 April 2026 06:45:19 +0000 (0:00:02.101) 0:03:51.404 **********
2026-04-05 06:45:24.003858 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:45:24.003868 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:45:24.003877 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:45:24.003887 | orchestrator |
2026-04-05 06:45:24.003896 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-05 06:45:24.003912 | orchestrator | Sunday 05 April 2026 06:45:20 +0000 (0:00:01.413) 0:03:52.818 **********
2026-04-05 06:45:24.003936 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:45:24.003960 | orchestrator |
2026-04-05 06:45:24.003978 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-05 06:45:24.003996 | orchestrator | Sunday 05 April 2026 06:45:22 +0000 (0:00:01.893) 0:03:54.712 **********
2026-04-05 06:45:24.004017 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:24.004057 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:24.004093 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:27.699908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:27.700016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:27.700071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:27.700087 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:27.700119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:27.700132 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:27.700144 | orchestrator | 2026-04-05 06:45:27.700157 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-05 06:45:27.700170 | orchestrator | Sunday 05 April 2026 06:45:26 +0000 (0:00:04.355) 0:03:59.067 ********** 2026-04-05 06:45:27.700269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:27.700301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:27.700315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:27.700326 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:45:27.700352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.559682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.559835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:29.559855 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:45:29.559869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.559883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.559914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:29.559935 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:45:29.559947 | orchestrator | 2026-04-05 06:45:29.559959 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 06:45:29.559971 | orchestrator | Sunday 05 April 2026 06:45:28 +0000 (0:00:02.060) 0:04:01.128 ********** 2026-04-05 06:45:29.559987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.560000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:29.560012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:29.560024 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:45:29.560044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:33.102668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:33.102824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:33.102840 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:45:33.102854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:33.102866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:33.102928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:33.102940 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:45:33.102951 | orchestrator | 2026-04-05 06:45:33.102962 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-05 06:45:33.102973 | orchestrator | Sunday 05 April 2026 06:45:30 +0000 (0:00:01.966) 0:04:03.094 ********** 2026-04-05 06:45:33.102989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:33.103000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:33.103011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:33.103040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-05 06:45:41.680441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:41.680464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:41.680485 | orchestrator | 2026-04-05 06:45:41.680507 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-05 06:45:41.680553 | orchestrator | Sunday 05 April 2026 06:45:35 +0000 (0:00:04.891) 0:04:07.986 ********** 2026-04-05 06:45:41.680591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:41.680688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:46.550646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:46.550751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:45:46.550793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:46.550807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:46.550819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:45:46.550831 | orchestrator | 2026-04-05 06:45:46.550843 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-05 06:45:46.550871 | orchestrator | Sunday 05 April 2026 06:45:45 +0000 (0:00:10.214) 0:04:18.201 ********** 2026-04-05 06:45:46.550891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:46.550904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:46.550925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:45:46.550937 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:45:46.550949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:45:46.550974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:46:04.700855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:46:04.700965 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:46:04.701018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:46:04.701035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:46:04.701064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:46:04.701077 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:46:04.701088 | orchestrator | 2026-04-05 06:46:04.701100 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-05 06:46:04.701112 | orchestrator | Sunday 05 April 2026 06:45:47 +0000 (0:00:01.833) 0:04:20.035 ********** 2026-04-05 06:46:04.701123 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:46:04.701134 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:46:04.701158 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:46:04.701170 | orchestrator | 2026-04-05 06:46:04.701181 | orchestrator | TASK [nova : Copying 
over nova-metadata-wsgi.conf] ***************************** 2026-04-05 06:46:04.701192 | orchestrator | Sunday 05 April 2026 06:45:49 +0000 (0:00:01.744) 0:04:21.780 ********** 2026-04-05 06:46:04.701203 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:46:04.701213 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:46:04.701224 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:46:04.701235 | orchestrator | 2026-04-05 06:46:04.701246 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-05 06:46:04.701274 | orchestrator | Sunday 05 April 2026 06:45:51 +0000 (0:00:02.122) 0:04:23.902 ********** 2026-04-05 06:46:04.701296 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-05 06:46:04.701308 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-05 06:46:04.701318 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:46:04.701329 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-05 06:46:04.701340 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-05 06:46:04.701426 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:46:04.701440 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-05 06:46:04.701453 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-05 06:46:04.701466 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:46:04.701479 | orchestrator | 2026-04-05 06:46:04.701492 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-05 06:46:04.701505 | orchestrator | Sunday 05 April 2026 06:45:53 +0000 (0:00:01.683) 0:04:25.586 ********** 2026-04-05 06:46:04.701519 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-05 06:46:04.701534 | orchestrator | included: service-uwsgi-config 
for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-05 06:46:04.701547 | orchestrator | 2026-04-05 06:46:04.701560 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-05 06:46:04.701573 | orchestrator | Sunday 05 April 2026 06:45:56 +0000 (0:00:02.673) 0:04:28.260 ********** 2026-04-05 06:46:04.701586 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:46:04.701599 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:46:04.701610 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:46:04.701621 | orchestrator | 2026-04-05 06:46:04.701632 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-05 06:46:04.701642 | orchestrator | Sunday 05 April 2026 06:45:59 +0000 (0:00:03.399) 0:04:31.660 ********** 2026-04-05 06:46:04.701653 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:46:04.701664 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:46:04.701674 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:46:04.701685 | orchestrator | 2026-04-05 06:46:04.701696 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-05 06:46:04.701707 | orchestrator | Sunday 05 April 2026 06:46:02 +0000 (0:00:03.478) 0:04:35.138 ********** 2026-04-05 06:46:04.701720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:46:04.701739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:46:04.701771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:46:09.192170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:46:09.192262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 
'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:46:09.192281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-05 06:46:09.192301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:46:09.192318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:46:09.192324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 
06:46:09.192328 | orchestrator | 2026-04-05 06:46:09.192334 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-05 06:46:09.192340 | orchestrator | Sunday 05 April 2026 06:46:07 +0000 (0:00:04.410) 0:04:39.548 ********** 2026-04-05 06:46:09.192345 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 06:46:09.192351 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:46:09.192355 | orchestrator | } 2026-04-05 06:46:09.192360 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 06:46:09.192426 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:46:09.192436 | orchestrator | } 2026-04-05 06:46:09.192443 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 06:46:09.192450 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 06:46:09.192456 | orchestrator | } 2026-04-05 06:46:09.192463 | orchestrator | 2026-04-05 06:46:09.192470 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 06:46:09.192476 | orchestrator | Sunday 05 April 2026 06:46:08 +0000 (0:00:01.386) 0:04:40.935 ********** 2026-04-05 06:46:09.192490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:46:09.192504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:46:09.192519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-04-05 06:47:47.414551 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:47:47.414672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:47:47.414693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:47:47.414814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:47:47.414830 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:47:47.414843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:47:47.414875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:47:47.414889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:47:47.414910 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:47:47.414921 | 
orchestrator | 2026-04-05 06:47:47.414934 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:47:47.414946 | orchestrator | Sunday 05 April 2026 06:46:10 +0000 (0:00:02.273) 0:04:43.208 ********** 2026-04-05 06:47:47.414957 | orchestrator | 2026-04-05 06:47:47.414968 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:47:47.414979 | orchestrator | Sunday 05 April 2026 06:46:11 +0000 (0:00:00.721) 0:04:43.929 ********** 2026-04-05 06:47:47.414989 | orchestrator | 2026-04-05 06:47:47.415000 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:47:47.415010 | orchestrator | Sunday 05 April 2026 06:46:12 +0000 (0:00:00.504) 0:04:44.434 ********** 2026-04-05 06:47:47.415021 | orchestrator | 2026-04-05 06:47:47.415032 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-05 06:47:47.415042 | orchestrator | Sunday 05 April 2026 06:46:13 +0000 (0:00:00.898) 0:04:45.333 ********** 2026-04-05 06:47:47.415059 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:47:47.415070 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:47:47.415081 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:47:47.415094 | orchestrator | 2026-04-05 06:47:47.415107 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-05 06:47:47.415120 | orchestrator | Sunday 05 April 2026 06:46:41 +0000 (0:00:28.426) 0:05:13.760 ********** 2026-04-05 06:47:47.415132 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:47:47.415145 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:47:47.415157 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:47:47.415169 | orchestrator | 2026-04-05 06:47:47.415182 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 
2026-04-05 06:47:47.415194 | orchestrator | Sunday 05 April 2026 06:46:55 +0000 (0:00:14.018) 0:05:27.779 ********** 2026-04-05 06:47:47.415206 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:47:47.415218 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:47:47.415232 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:47:47.415244 | orchestrator | 2026-04-05 06:47:47.415255 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-05 06:47:47.415266 | orchestrator | 2026-04-05 06:47:47.415276 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:47:47.415287 | orchestrator | Sunday 05 April 2026 06:47:06 +0000 (0:00:10.838) 0:05:38.617 ********** 2026-04-05 06:47:47.415298 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:47:47.415309 | orchestrator | 2026-04-05 06:47:47.415320 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:47:47.415331 | orchestrator | Sunday 05 April 2026 06:47:08 +0000 (0:00:02.536) 0:05:41.153 ********** 2026-04-05 06:47:47.415341 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:47:47.415352 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:47:47.415363 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:47:47.415373 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:47:47.415384 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:47:47.415394 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:47:47.415405 | orchestrator | 2026-04-05 06:47:47.415415 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-05 06:47:47.415426 | orchestrator | Sunday 05 April 2026 06:47:11 +0000 (0:00:02.309) 0:05:43.462 ********** 2026-04-05 
06:47:47.415437 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:47:47.415447 | orchestrator |
2026-04-05 06:47:47.415458 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-05 06:47:47.415476 | orchestrator | Sunday 05 April 2026  06:47:45 +0000 (0:00:34.624) 0:06:18.087 **********
2026-04-05 06:47:47.415487 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:47:47.415499 | orchestrator |
2026-04-05 06:47:47.415515 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-05 06:48:39.320827 | orchestrator | Sunday 05 April 2026  06:47:48 +0000 (0:00:02.593) 0:06:20.680 **********
2026-04-05 06:48:39.321006 | orchestrator | included: service-image-info for testbed-node-3
2026-04-05 06:48:39.321024 | orchestrator |
2026-04-05 06:48:39.321037 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-05 06:48:39.321048 | orchestrator | Sunday 05 April 2026  06:47:50 +0000 (0:00:02.040) 0:06:22.720 **********
2026-04-05 06:48:39.321060 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.321071 | orchestrator |
2026-04-05 06:48:39.321082 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-05 06:48:39.321093 | orchestrator | Sunday 05 April 2026  06:47:54 +0000 (0:00:04.319) 0:06:27.040 **********
2026-04-05 06:48:39.321104 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.321115 | orchestrator |
2026-04-05 06:48:39.321126 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-05 06:48:39.321137 | orchestrator | Sunday 05 April 2026  06:47:57 +0000 (0:00:03.126) 0:06:30.166 **********
2026-04-05 06:48:39.321148 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:48:39.321160 | orchestrator |
2026-04-05 06:48:39.321171 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-05 06:48:39.321181 | orchestrator | Sunday 05 April 2026  06:48:01 +0000 (0:00:03.124) 0:06:33.292 **********
2026-04-05 06:48:39.321192 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:48:39.321203 | orchestrator |
2026-04-05 06:48:39.321213 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-05 06:48:39.321224 | orchestrator | Sunday 05 April 2026  06:48:04 +0000 (0:00:03.090) 0:06:36.382 **********
2026-04-05 06:48:39.321235 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.321246 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.321256 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.321267 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.321278 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:48:39.321289 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:48:39.321300 | orchestrator |
2026-04-05 06:48:39.321311 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-05 06:48:39.321321 | orchestrator | Sunday 05 April 2026  06:48:09 +0000 (0:00:05.235) 0:06:41.618 **********
2026-04-05 06:48:39.321332 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.321343 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.321353 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.321364 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.321375 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:48:39.321385 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:48:39.321396 | orchestrator |
2026-04-05 06:48:39.321407 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-05 06:48:39.321417 | orchestrator | Sunday 05 April 2026  06:48:15 +0000 (0:00:05.770) 0:06:47.388 **********
2026-04-05 06:48:39.321428 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.321439 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.321449 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.321460 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 06:48:39.321470 | orchestrator |  "changed": false,
2026-04-05 06:48:39.321481 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-05 06:48:39.321493 | orchestrator | }
2026-04-05 06:48:39.321504 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 06:48:39.321532 | orchestrator |  "changed": false,
2026-04-05 06:48:39.321544 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-05 06:48:39.321555 | orchestrator | }
2026-04-05 06:48:39.321591 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 06:48:39.321603 | orchestrator |  "changed": false,
2026-04-05 06:48:39.321613 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n"
2026-04-05 06:48:39.321624 | orchestrator | }
2026-04-05 06:48:39.321635 | orchestrator |
2026-04-05 06:48:39.321645 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-05 06:48:39.321656 | orchestrator | Sunday 05 April 2026  06:48:22 +0000 (0:00:07.681) 0:06:55.070 **********
2026-04-05 06:48:39.321666 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.321677 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.321688 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.321698 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 06:48:39.321709 | orchestrator |
2026-04-05 06:48:39.321720 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-05 06:48:39.321730 | orchestrator | Sunday 05 April 2026  06:48:25 +0000 (0:00:02.255) 0:06:57.325 **********
2026-04-05 06:48:39.321741 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-05 06:48:39.321751 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-05 06:48:39.321762 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-05 06:48:39.321772 | orchestrator |
2026-04-05 06:48:39.321783 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-05 06:48:39.321794 | orchestrator | Sunday 05 April 2026  06:48:26 +0000 (0:00:01.711) 0:06:59.037 **********
2026-04-05 06:48:39.321804 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-05 06:48:39.321815 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-05 06:48:39.321826 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-05 06:48:39.321836 | orchestrator |
2026-04-05 06:48:39.321847 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-05 06:48:39.321857 | orchestrator | Sunday 05 April 2026  06:48:29 +0000 (0:00:02.374) 0:07:01.411 **********
2026-04-05 06:48:39.321868 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-05 06:48:39.321878 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:48:39.321888 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-05 06:48:39.321900 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:48:39.321910 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-05 06:48:39.321950 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:48:39.321970 | orchestrator |
2026-04-05 06:48:39.321989 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-05 06:48:39.322096 | orchestrator | Sunday 05 April 2026  06:48:30 +0000 (0:00:01.676) 0:07:03.087 **********
2026-04-05 06:48:39.322117 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322128 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322139 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322149 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322160 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322170 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.322181 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322191 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322202 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.322213 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-05 06:48:39.322223 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322234 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.322244 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322267 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322277 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-05 06:48:39.322288 | orchestrator |
2026-04-05 06:48:39.322299 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-05 06:48:39.322309 | orchestrator | Sunday 05 April 2026  06:48:33 +0000 (0:00:02.326) 0:07:05.414 **********
2026-04-05 06:48:39.322320 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.322330 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.322341 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.322351 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.322362 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:48:39.322372 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:48:39.322383 | orchestrator |
2026-04-05 06:48:39.322393 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-05 06:48:39.322404 | orchestrator | Sunday 05 April 2026  06:48:35 +0000 (0:00:02.697) 0:07:07.585 **********
2026-04-05 06:48:39.322415 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:48:39.322425 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:48:39.322436 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:48:39.322447 | orchestrator | ok: [testbed-node-4]
2026-04-05 06:48:39.322457 | orchestrator | ok: [testbed-node-3]
2026-04-05 06:48:39.322468 | orchestrator | ok: [testbed-node-5]
2026-04-05 06:48:39.322478 | orchestrator |
2026-04-05 06:48:39.322489 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 06:48:39.322500 | orchestrator | Sunday 05 April 2026  06:48:38 +0000 (0:00:02.170) 0:07:10.283 **********
2026-04-05 06:48:39.322522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:39.322537 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:39.322559 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:42.841647 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:42.841754 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:42.841789 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:42.841802 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.841815 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:42.841826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:42.841878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.841892 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.841912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.841924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:42.842006 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.842075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:42.842098 | orchestrator |
2026-04-05 06:48:42.842111 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-05 06:48:42.842123 | orchestrator | Sunday 05 April 2026  06:48:41 +0000 (0:00:03.470) 0:07:13.753 **********
2026-04-05 06:48:42.842143 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:48:47.229099 | orchestrator |
2026-04-05 06:48:47.229183 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-05 06:48:47.229195 | orchestrator | Sunday 05 April 2026  06:48:43 +0000 (0:00:02.176) 0:07:15.930 **********
2026-04-05 06:48:47.229206 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:47.229231 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:47.229238 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:47.229247 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:47.229272 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:47.229292 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:47.229301 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:47.229311 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:47.229319 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:47.229326 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:47.229340 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:47.229353 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777137 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777267 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777298 | orchestrator |
2026-04-05 06:48:50.777311 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-05 06:48:50.777347 | orchestrator | Sunday 05 April 2026  06:48:48 +0000 (0:00:05.126) 0:07:21.057 **********
2026-04-05 06:48:50.777361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:50.777374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:50.777408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777421 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:48:50.777440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:50.777452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:48:50.777470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:48:50.777482 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:48:50.777493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:48:50.777514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:53.304415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:48:53.304541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes':
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:48:53.304559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:48:53.304593 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:48:53.304609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:48:53.304623 | orchestrator | skipping: [testbed-node-5] 
2026-04-05 06:48:53.304634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:48:53.304646 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:48:53.304657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:48:53.304686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:48:53.304698 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:48:53.304709 | orchestrator | 2026-04-05 06:48:53.304721 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 06:48:53.304733 | orchestrator | Sunday 05 April 2026 06:48:52 +0000 (0:00:03.415) 0:07:24.472 ********** 2026-04-05 06:48:53.304750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:48:53.304770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 
06:48:53.304782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:48:53.304793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:48:53.304804 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:48:53.304823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:49:02.097798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:49:02.097936 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:02.097956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:49:02.097970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:49:02.097982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:49:02.098100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:49:02.098114 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:49:02.098154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:49:02.098177 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:02.098189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:49:02.098201 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:49:02.098212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:49:02.098223 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:49:02.098235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 
06:49:02.098246 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:49:02.098256 | orchestrator | 2026-04-05 06:49:02.098268 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:49:02.098281 | orchestrator | Sunday 05 April 2026 06:48:55 +0000 (0:00:03.552) 0:07:28.025 ********** 2026-04-05 06:49:02.098295 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:49:02.098308 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:49:02.098321 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:49:02.098335 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:49:02.098348 | orchestrator | 2026-04-05 06:49:02.098362 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-05 06:49:02.098376 | orchestrator | Sunday 05 April 2026 06:48:58 +0000 (0:00:02.202) 0:07:30.228 ********** 2026-04-05 06:49:02.098389 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:49:02.098402 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:49:02.098415 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 06:49:02.098434 | orchestrator | 2026-04-05 06:49:02.098447 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-05 06:49:02.098460 | orchestrator | Sunday 05 April 2026 06:49:00 +0000 (0:00:02.062) 0:07:32.290 ********** 2026-04-05 06:49:02.098473 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:49:02.098486 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:49:02.098499 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 06:49:02.098513 | orchestrator | 2026-04-05 06:49:02.098526 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-05 06:49:02.098546 | orchestrator | Sunday 05 April 2026 06:49:02 
+0000 (0:00:02.027) 0:07:34.318 ********** 2026-04-05 06:49:44.913057 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:49:44.913176 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:49:44.913186 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:49:44.913192 | orchestrator | 2026-04-05 06:49:44.913198 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-05 06:49:44.913218 | orchestrator | Sunday 05 April 2026 06:49:03 +0000 (0:00:01.536) 0:07:35.855 ********** 2026-04-05 06:49:44.913223 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:49:44.913228 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:49:44.913233 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:49:44.913238 | orchestrator | 2026-04-05 06:49:44.913244 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-05 06:49:44.913249 | orchestrator | Sunday 05 April 2026 06:49:05 +0000 (0:00:01.848) 0:07:37.704 ********** 2026-04-05 06:49:44.913254 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:49:44.913260 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:49:44.913265 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:49:44.913270 | orchestrator | 2026-04-05 06:49:44.913275 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-05 06:49:44.913281 | orchestrator | Sunday 05 April 2026 06:49:07 +0000 (0:00:02.200) 0:07:39.904 ********** 2026-04-05 06:49:44.913286 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:49:44.913292 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:49:44.913297 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:49:44.913302 | orchestrator | 2026-04-05 06:49:44.913307 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-05 
06:49:44.913312 | orchestrator | Sunday 05 April 2026 06:49:09 +0000 (0:00:02.191) 0:07:42.096 ********** 2026-04-05 06:49:44.913317 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:49:44.913322 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:49:44.913327 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:49:44.913332 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-05 06:49:44.913337 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-05 06:49:44.913342 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-05 06:49:44.913347 | orchestrator | 2026-04-05 06:49:44.913352 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-05 06:49:44.913357 | orchestrator | Sunday 05 April 2026 06:49:14 +0000 (0:00:05.119) 0:07:47.215 ********** 2026-04-05 06:49:44.913362 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913368 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:44.913373 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:44.913378 | orchestrator | 2026-04-05 06:49:44.913383 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-05 06:49:44.913389 | orchestrator | Sunday 05 April 2026 06:49:16 +0000 (0:00:01.699) 0:07:48.915 ********** 2026-04-05 06:49:44.913394 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913399 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:44.913404 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:44.913409 | orchestrator | 2026-04-05 06:49:44.913414 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-05 06:49:44.913436 | orchestrator | Sunday 05 April 2026 06:49:18 +0000 (0:00:01.381) 0:07:50.297 ********** 2026-04-05 06:49:44.913442 | orchestrator | ok: [testbed-node-3] 2026-04-05 
06:49:44.913447 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:49:44.913452 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:49:44.913457 | orchestrator | 2026-04-05 06:49:44.913462 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-05 06:49:44.913467 | orchestrator | Sunday 05 April 2026 06:49:20 +0000 (0:00:02.516) 0:07:52.814 ********** 2026-04-05 06:49:44.913473 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:49:44.913479 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:49:44.913484 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:49:44.913494 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:49:44.913504 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:49:44.913513 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:49:44.913522 | orchestrator | 2026-04-05 06:49:44.913531 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] 
***************************** 2026-04-05 06:49:44.913540 | orchestrator | Sunday 05 April 2026 06:49:25 +0000 (0:00:04.509) 0:07:57.324 ********** 2026-04-05 06:49:44.913549 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:49:44.913558 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:49:44.913567 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:49:44.913577 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:49:44.913602 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:49:44.913612 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:49:44.913623 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:49:44.913632 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:49:44.913642 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:49:44.913652 | orchestrator | 2026-04-05 06:49:44.913663 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-05 06:49:44.913677 | orchestrator | Sunday 05 April 2026 06:49:29 +0000 (0:00:04.093) 0:08:01.417 ********** 2026-04-05 06:49:44.913684 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:49:44.913689 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:49:44.913695 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:49:44.913702 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:49:44.913708 | orchestrator | 2026-04-05 06:49:44.913714 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-05 06:49:44.913720 | orchestrator | Sunday 05 April 2026 06:49:32 +0000 (0:00:03.533) 0:08:04.951 ********** 2026-04-05 06:49:44.913725 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:49:44.913731 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:49:44.913737 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-04-05 06:49:44.913743 | orchestrator | 2026-04-05 06:49:44.913749 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-05 06:49:44.913761 | orchestrator | Sunday 05 April 2026 06:49:34 +0000 (0:00:02.032) 0:08:06.984 ********** 2026-04-05 06:49:44.913767 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913773 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:44.913779 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:44.913785 | orchestrator | 2026-04-05 06:49:44.913790 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-05 06:49:44.913796 | orchestrator | Sunday 05 April 2026 06:49:36 +0000 (0:00:01.426) 0:08:08.411 ********** 2026-04-05 06:49:44.913802 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913807 | orchestrator | 2026-04-05 06:49:44.913814 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-05 06:49:44.913819 | orchestrator | Sunday 05 April 2026 06:49:37 +0000 (0:00:01.139) 0:08:09.551 ********** 2026-04-05 06:49:44.913825 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913830 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:44.913836 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:44.913842 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:49:44.913848 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:49:44.913853 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:49:44.913859 | orchestrator | 2026-04-05 06:49:44.913865 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-05 06:49:44.913871 | orchestrator | Sunday 05 April 2026 06:49:39 +0000 (0:00:01.974) 0:08:11.525 ********** 2026-04-05 06:49:44.913877 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:49:44.913882 | orchestrator | 2026-04-05 
06:49:44.913888 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-05 06:49:44.913894 | orchestrator | Sunday 05 April 2026 06:49:41 +0000 (0:00:01.821) 0:08:13.346 ********** 2026-04-05 06:49:44.913900 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:49:44.913906 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:49:44.913911 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:49:44.913917 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:49:44.913923 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:49:44.913928 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:49:44.913934 | orchestrator | 2026-04-05 06:49:44.913940 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-05 06:49:44.913946 | orchestrator | Sunday 05 April 2026 06:49:43 +0000 (0:00:01.951) 0:08:15.298 ********** 2026-04-05 06:49:44.913954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:49:44.913968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771000 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:46.771374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:52.629184 | orchestrator | 2026-04-05 06:49:52.629290 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-05 06:49:52.629305 | orchestrator | Sunday 05 April 2026 06:49:47 +0000 (0:00:04.794) 0:08:20.092 ********** 2026-04-05 06:49:52.629319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:49:52.629333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:49:52.629345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:49:52.629355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:49:52.629401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:49:52.629431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:49:52.629442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:52.629454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:49:52.629465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:49:52.629483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:49:52.629505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:50:15.296730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:50:15.296847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:50:15.296866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:50:15.296879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:50:15.296919 | orchestrator | 2026-04-05 06:50:15.296933 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-05 06:50:15.296945 | orchestrator | Sunday 05 April 2026 06:49:56 +0000 
(0:00:08.439) 0:08:28.532 ********** 2026-04-05 06:50:15.296956 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:50:15.296969 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:50:15.296980 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:15.296991 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:50:15.297002 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:15.297059 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:15.297074 | orchestrator | 2026-04-05 06:50:15.297085 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-05 06:50:15.297096 | orchestrator | Sunday 05 April 2026 06:49:58 +0000 (0:00:02.684) 0:08:31.217 ********** 2026-04-05 06:50:15.297107 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 06:50:15.297118 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 06:50:15.297129 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 06:50:15.297154 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 06:50:15.297165 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 06:50:15.297177 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:15.297188 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 06:50:15.297226 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 06:50:15.297237 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:15.297251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 06:50:15.297263 | orchestrator | skipping: [testbed-node-2] 2026-04-05 
06:50:15.297276 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 06:50:15.297308 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 06:50:15.297321 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 06:50:15.297333 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 06:50:15.297346 | orchestrator | 2026-04-05 06:50:15.297359 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-05 06:50:15.297372 | orchestrator | Sunday 05 April 2026 06:50:04 +0000 (0:00:05.377) 0:08:36.594 ********** 2026-04-05 06:50:15.297384 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:50:15.297397 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:50:15.297409 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:50:15.297422 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:15.297435 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:15.297448 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:15.297460 | orchestrator | 2026-04-05 06:50:15.297473 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-05 06:50:15.297486 | orchestrator | Sunday 05 April 2026 06:50:06 +0000 (0:00:01.743) 0:08:38.338 ********** 2026-04-05 06:50:15.297499 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 06:50:15.297512 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 06:50:15.297534 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 06:50:15.297547 | orchestrator | 
ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 06:50:15.297560 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 06:50:15.297572 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297586 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297599 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 06:50:15.297609 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297631 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:15.297642 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297652 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:15.297663 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 06:50:15.297674 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:15.297685 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 06:50:15.297696 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 06:50:15.297706 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 
06:50:15.297717 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 06:50:15.297728 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 06:50:15.297739 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 06:50:15.297749 | orchestrator | 2026-04-05 06:50:15.297760 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-05 06:50:15.297771 | orchestrator | Sunday 05 April 2026 06:50:11 +0000 (0:00:05.859) 0:08:44.198 ********** 2026-04-05 06:50:15.297782 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 06:50:15.297798 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 06:50:15.297809 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 06:50:15.297820 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:50:15.297831 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 06:50:15.297841 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 06:50:15.297852 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:50:15.297863 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 06:50:15.297873 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 06:50:15.297884 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 06:50:15.297902 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 06:50:31.561097 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 06:50:31.561213 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 06:50:31.561229 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:31.561290 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 06:50:31.561301 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:31.561312 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:50:31.561323 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:50:31.561334 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 06:50:31.561345 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 06:50:31.561356 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:31.561367 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:50:31.561378 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:50:31.561388 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 06:50:31.561399 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:50:31.561410 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:50:31.561421 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 06:50:31.561432 | orchestrator | 2026-04-05 06:50:31.561443 | orchestrator | TASK [nova-cell : Copying VMware 
vCenter CA file] ****************************** 2026-04-05 06:50:31.561454 | orchestrator | Sunday 05 April 2026 06:50:20 +0000 (0:00:08.298) 0:08:52.496 ********** 2026-04-05 06:50:31.561465 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:50:31.561476 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:50:31.561486 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:50:31.561497 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:31.561508 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:31.561518 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:31.561529 | orchestrator | 2026-04-05 06:50:31.561540 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-05 06:50:31.561551 | orchestrator | Sunday 05 April 2026 06:50:22 +0000 (0:00:01.967) 0:08:54.464 ********** 2026-04-05 06:50:31.561562 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:50:31.561573 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:50:31.561583 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:50:31.561594 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:31.561604 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:31.561615 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:31.561626 | orchestrator | 2026-04-05 06:50:31.561639 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-05 06:50:31.561653 | orchestrator | Sunday 05 April 2026 06:50:24 +0000 (0:00:01.814) 0:08:56.278 ********** 2026-04-05 06:50:31.561665 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:50:31.561677 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:50:31.561690 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:50:31.561703 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:50:31.561715 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:50:31.561728 | orchestrator | ok: [testbed-node-5] 
2026-04-05 06:50:31.561740 | orchestrator |
2026-04-05 06:50:31.561752 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-05 06:50:31.561764 | orchestrator | Sunday 05 April 2026 06:50:27 +0000 (0:00:03.224) 0:08:59.503 **********
2026-04-05 06:50:31.561777 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:50:31.561790 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:50:31.561829 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:50:31.561842 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:50:31.561855 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:50:31.561866 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:50:31.561879 | orchestrator |
2026-04-05 06:50:31.561891 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-05 06:50:31.561904 | orchestrator | Sunday 05 April 2026 06:50:30 +0000 (0:00:03.185) 0:09:02.688 **********
2026-04-05 06:50:31.561935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:31.561971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:31.561987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:31.562000 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:50:31.562011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:31.562082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:31.562153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:31.562167 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:50:31.562188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:36.721816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:36.721929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:36.721946 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:50:36.721961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:36.721999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:36.722012 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:50:36.722097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:36.722123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:36.722135 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:50:36.722175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:36.722205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:36.722227 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:50:36.722304 | orchestrator |
2026-04-05 06:50:36.722326 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-05 06:50:36.722347 | orchestrator | Sunday 05 April 2026 06:50:33 +0000 (0:00:02.823) 0:09:05.512 **********
2026-04-05 06:50:36.722383 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-05 06:50:36.722401 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722419 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:50:36.722437 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-05 06:50:36.722457 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722475 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:50:36.722495 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-05 06:50:36.722514 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722532 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:50:36.722551 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-05 06:50:36.722568 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722586 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:50:36.722604 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-05 06:50:36.722622 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722640 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:50:36.722658 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-05 06:50:36.722677 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-05 06:50:36.722696 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:50:36.722716 | orchestrator |
2026-04-05 06:50:36.722735 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-04-05 06:50:36.722755 | orchestrator | Sunday 05 April 2026 06:50:35 +0000 (0:00:01.951) 0:09:07.464 **********
2026-04-05 06:50:36.722787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:36.722830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:38.211195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:38.211436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:38.211474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:38.211508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:38.211521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:38.211533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:50:38.211563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:38.211587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:38.211600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:38.211617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:38.211628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:38.211647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:42.848747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:50:42.848875 | orchestrator |
2026-04-05 06:50:42.848892 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-04-05 06:50:42.848904 | orchestrator | Sunday 05 April 2026 06:50:39 +0000 (0:00:04.335) 0:09:11.799 **********
2026-04-05 06:50:42.848914 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 06:50:42.848925 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.848935 | orchestrator | }
2026-04-05 06:50:42.848945 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 06:50:42.848954 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.848964 | orchestrator | }
2026-04-05 06:50:42.848974 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 06:50:42.848983 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.848993 | orchestrator | }
2026-04-05 06:50:42.849002 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 06:50:42.849012 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.849021 | orchestrator | }
2026-04-05 06:50:42.849031 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 06:50:42.849041 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.849051 | orchestrator | }
2026-04-05 06:50:42.849060 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 06:50:42.849070 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:50:42.849079 | orchestrator | }
2026-04-05 06:50:42.849089 | orchestrator |
2026-04-05 06:50:42.849099 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 06:50:42.849109 | orchestrator | Sunday 05 April 2026 06:50:41 +0000 (0:00:02.114) 0:09:13.914 **********
2026-04-05 06:50:42.849120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:42.849147 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:42.849160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:42.849179 | orchestrator | skipping: [testbed-node-4]
2026-04-05 06:50:42.849206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:42.849218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:42.849228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:50:42.849239 | orchestrator | skipping: [testbed-node-5]
2026-04-05 06:50:42.849253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 06:50:42.849303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 06:50:42.849336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 06:53:39.293718 | orchestrator | skipping: [testbed-node-3]
2026-04-05 06:53:39.293821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:53:39.293842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 06:53:39.293855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:53:39.293867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 06:53:39.293892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:53:39.293924 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:53:39.293937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:53:39.293949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:53:39.293960 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:53:39.293971 | orchestrator | 2026-04-05 06:53:39.293983 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:53:39.294012 | orchestrator | Sunday 05 April 2026 06:50:44 +0000 (0:00:03.291) 0:09:17.206 ********** 2026-04-05 06:53:39.294084 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:53:39.294096 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:53:39.294107 | orchestrator | skipping: 
[testbed-node-5] 2026-04-05 06:53:39.294118 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:53:39.294128 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:53:39.294139 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:53:39.294150 | orchestrator | 2026-04-05 06:53:39.294161 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294172 | orchestrator | Sunday 05 April 2026 06:50:47 +0000 (0:00:02.017) 0:09:19.224 ********** 2026-04-05 06:53:39.294182 | orchestrator | 2026-04-05 06:53:39.294193 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294204 | orchestrator | Sunday 05 April 2026 06:50:47 +0000 (0:00:00.768) 0:09:19.992 ********** 2026-04-05 06:53:39.294215 | orchestrator | 2026-04-05 06:53:39.294227 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294240 | orchestrator | Sunday 05 April 2026 06:50:48 +0000 (0:00:00.533) 0:09:20.526 ********** 2026-04-05 06:53:39.294253 | orchestrator | 2026-04-05 06:53:39.294266 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294279 | orchestrator | Sunday 05 April 2026 06:50:48 +0000 (0:00:00.498) 0:09:21.025 ********** 2026-04-05 06:53:39.294291 | orchestrator | 2026-04-05 06:53:39.294304 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294316 | orchestrator | Sunday 05 April 2026 06:50:49 +0000 (0:00:00.520) 0:09:21.546 ********** 2026-04-05 06:53:39.294328 | orchestrator | 2026-04-05 06:53:39.294341 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-05 06:53:39.294353 | orchestrator | Sunday 05 April 2026 06:50:49 +0000 (0:00:00.493) 0:09:22.039 ********** 2026-04-05 06:53:39.294365 | orchestrator | 
2026-04-05 06:53:39.294377 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-05 06:53:39.294389 | orchestrator | Sunday 05 April 2026 06:50:50 +0000 (0:00:01.118) 0:09:23.158 ********** 2026-04-05 06:53:39.294402 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:53:39.294414 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:53:39.294426 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:53:39.294439 | orchestrator | 2026-04-05 06:53:39.294459 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-05 06:53:39.294472 | orchestrator | Sunday 05 April 2026 06:51:05 +0000 (0:00:14.919) 0:09:38.077 ********** 2026-04-05 06:53:39.294485 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:53:39.294498 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:53:39.294510 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:53:39.294521 | orchestrator | 2026-04-05 06:53:39.294532 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-05 06:53:39.294543 | orchestrator | Sunday 05 April 2026 06:51:28 +0000 (0:00:22.622) 0:10:00.699 ********** 2026-04-05 06:53:39.294554 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:53:39.294564 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:53:39.294575 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:53:39.294586 | orchestrator | 2026-04-05 06:53:39.294597 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-05 06:53:39.294607 | orchestrator | Sunday 05 April 2026 06:51:55 +0000 (0:00:27.057) 0:10:27.757 ********** 2026-04-05 06:53:39.294618 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:53:39.294629 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:53:39.294663 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:53:39.294675 | orchestrator | 2026-04-05 
06:53:39.294686 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-05 06:53:39.294697 | orchestrator | Sunday 05 April 2026 06:52:39 +0000 (0:00:44.096) 0:11:11.853 ********** 2026-04-05 06:53:39.294707 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:53:39.294718 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:53:39.294729 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-04-05 06:53:39.294740 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:53:39.294751 | orchestrator | 2026-04-05 06:53:39.294762 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-05 06:53:39.294773 | orchestrator | Sunday 05 April 2026 06:52:46 +0000 (0:00:07.275) 0:11:19.129 ********** 2026-04-05 06:53:39.294783 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:53:39.294794 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:53:39.294805 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:53:39.294815 | orchestrator | 2026-04-05 06:53:39.294826 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-05 06:53:39.294836 | orchestrator | Sunday 05 April 2026 06:52:48 +0000 (0:00:01.799) 0:11:20.929 ********** 2026-04-05 06:53:39.294847 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:53:39.294857 | orchestrator | changed: [testbed-node-4] 2026-04-05 06:53:39.294868 | orchestrator | changed: [testbed-node-5] 2026-04-05 06:53:39.294879 | orchestrator | 2026-04-05 06:53:39.294890 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-05 06:53:39.294901 | orchestrator | Sunday 05 April 2026 06:53:20 +0000 (0:00:31.488) 0:11:52.417 ********** 2026-04-05 06:53:39.294912 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:53:39.294922 | orchestrator | 2026-04-05 
06:53:39.294933 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-05 06:53:39.294943 | orchestrator | Sunday 05 April 2026 06:53:21 +0000 (0:00:01.493) 0:11:53.910 ********** 2026-04-05 06:53:39.294954 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:53:39.294965 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:53:39.294975 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:53:39.294986 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:53:39.294997 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:53:39.295007 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:53:39.295018 | orchestrator | 2026-04-05 06:53:39.295029 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-05 06:53:39.295039 | orchestrator | Sunday 05 April 2026 06:53:30 +0000 (0:00:09.086) 0:12:02.997 ********** 2026-04-05 06:53:39.295050 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:53:39.295076 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:54:31.996196 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:54:31.996309 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.996324 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.996335 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.996345 | orchestrator | 2026-04-05 06:54:31.996357 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-05 06:54:31.996368 | orchestrator | Sunday 05 April 2026 06:53:41 +0000 (0:00:11.095) 0:12:14.093 ********** 2026-04-05 06:54:31.996378 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:54:31.996388 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:54:31.996397 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.996431 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.996448 | 
orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.996481 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-05 06:54:31.996506 | orchestrator | 2026-04-05 06:54:31.996524 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-05 06:54:31.996542 | orchestrator | Sunday 05 April 2026 06:53:47 +0000 (0:00:05.548) 0:12:19.641 ********** 2026-04-05 06:54:31.996557 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:54:31.996575 | orchestrator | 2026-04-05 06:54:31.996591 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-05 06:54:31.996608 | orchestrator | Sunday 05 April 2026 06:54:00 +0000 (0:00:12.975) 0:12:32.617 ********** 2026-04-05 06:54:31.996625 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:54:31.996642 | orchestrator | 2026-04-05 06:54:31.996659 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-05 06:54:31.996677 | orchestrator | Sunday 05 April 2026 06:54:03 +0000 (0:00:02.922) 0:12:35.540 ********** 2026-04-05 06:54:31.996695 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:54:31.996713 | orchestrator | 2026-04-05 06:54:31.996756 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-05 06:54:31.996771 | orchestrator | Sunday 05 April 2026 06:54:05 +0000 (0:00:02.558) 0:12:38.098 ********** 2026-04-05 06:54:31.996783 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-05 06:54:31.996794 | orchestrator | 2026-04-05 06:54:31.996805 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-05 06:54:31.996817 | orchestrator | 2026-04-05 06:54:31.996828 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] 
***************************** 2026-04-05 06:54:31.996839 | orchestrator | Sunday 05 April 2026 06:54:19 +0000 (0:00:13.206) 0:12:51.305 ********** 2026-04-05 06:54:31.996850 | orchestrator | changed: [testbed-node-0] 2026-04-05 06:54:31.996862 | orchestrator | changed: [testbed-node-1] 2026-04-05 06:54:31.996873 | orchestrator | changed: [testbed-node-2] 2026-04-05 06:54:31.996884 | orchestrator | 2026-04-05 06:54:31.996895 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-05 06:54:31.996906 | orchestrator | 2026-04-05 06:54:31.996917 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-05 06:54:31.996928 | orchestrator | Sunday 05 April 2026 06:54:21 +0000 (0:00:02.340) 0:12:53.645 ********** 2026-04-05 06:54:31.996939 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.996951 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.996962 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.996973 | orchestrator | 2026-04-05 06:54:31.997001 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-05 06:54:31.997013 | orchestrator | 2026-04-05 06:54:31.997024 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-05 06:54:31.997035 | orchestrator | Sunday 05 April 2026 06:54:23 +0000 (0:00:01.717) 0:12:55.362 ********** 2026-04-05 06:54:31.997046 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-05 06:54:31.997058 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-05 06:54:31.997102 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997119 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-05 06:54:31.997134 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-05 06:54:31.997149 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997163 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:54:31.997177 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-05 06:54:31.997191 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-05 06:54:31.997206 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997221 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-05 06:54:31.997236 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-05 06:54:31.997253 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997268 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:54:31.997284 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-05 06:54:31.997298 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-05 06:54:31.997307 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997317 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-05 06:54:31.997326 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-05 06:54:31.997335 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997345 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:54:31.997354 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-05 06:54:31.997364 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-05 06:54:31.997373 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997382 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-05 06:54:31.997392 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-05 06:54:31.997420 | 
orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997430 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.997439 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-05 06:54:31.997449 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-05 06:54:31.997458 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997468 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-05 06:54:31.997477 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-05 06:54:31.997486 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997495 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.997505 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-05 06:54:31.997514 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-05 06:54:31.997524 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-05 06:54:31.997533 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-05 06:54:31.997542 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-05 06:54:31.997552 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-05 06:54:31.997561 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.997570 | orchestrator | 2026-04-05 06:54:31.997580 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-05 06:54:31.997589 | orchestrator | 2026-04-05 06:54:31.997599 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-05 06:54:31.997608 | orchestrator | Sunday 05 April 2026 06:54:25 +0000 (0:00:02.610) 0:12:57.973 ********** 2026-04-05 06:54:31.997628 | orchestrator | skipping: 
[testbed-node-0] => (item=nova-scheduler)  2026-04-05 06:54:31.997637 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-05 06:54:31.997647 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.997656 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-05 06:54:31.997665 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-05 06:54:31.997675 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.997684 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-05 06:54:31.997693 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-05 06:54:31.997703 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.997712 | orchestrator | 2026-04-05 06:54:31.997721 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-05 06:54:31.997753 | orchestrator | 2026-04-05 06:54:31.997763 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-05 06:54:31.997772 | orchestrator | Sunday 05 April 2026 06:54:27 +0000 (0:00:01.914) 0:12:59.888 ********** 2026-04-05 06:54:31.997781 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.997791 | orchestrator | 2026-04-05 06:54:31.997800 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-05 06:54:31.997810 | orchestrator | 2026-04-05 06:54:31.997819 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-05 06:54:31.997828 | orchestrator | Sunday 05 April 2026 06:54:29 +0000 (0:00:01.924) 0:13:01.813 ********** 2026-04-05 06:54:31.997838 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:54:31.997853 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:54:31.997863 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:54:31.997873 | orchestrator | 2026-04-05 06:54:31.997882 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 06:54:31.997892 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 06:54:31.997903 | orchestrator | testbed-node-0 : ok=58  changed=25  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0 2026-04-05 06:54:31.997913 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-05 06:54:31.997922 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0 2026-04-05 06:54:31.997931 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-05 06:54:31.997941 | orchestrator | testbed-node-4 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-05 06:54:31.997950 | orchestrator | testbed-node-5 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 06:54:31.997959 | orchestrator | 2026-04-05 06:54:31.997969 | orchestrator | 2026-04-05 06:54:31.997978 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 06:54:31.997988 | orchestrator | Sunday 05 April 2026 06:54:31 +0000 (0:00:02.393) 0:13:04.206 ********** 2026-04-05 06:54:31.997997 | orchestrator | =============================================================================== 2026-04-05 06:54:31.998006 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.10s 2026-04-05 06:54:31.998074 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 34.62s 2026-04-05 06:54:31.998088 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.48s 2026-04-05 06:54:31.998098 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.49s 2026-04-05 
06:54:31.998115 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 28.43s 2026-04-05 06:54:32.348141 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.06s 2026-04-05 06:54:32.348243 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 22.62s 2026-04-05 06:54:32.348258 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.66s 2026-04-05 06:54:32.348270 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.39s 2026-04-05 06:54:32.348282 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 14.92s 2026-04-05 06:54:32.348292 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.02s 2026-04-05 06:54:32.348303 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.83s 2026-04-05 06:54:32.348314 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.30s 2026-04-05 06:54:32.348325 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.21s 2026-04-05 06:54:32.348335 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.98s 2026-04-05 06:54:32.348346 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.57s 2026-04-05 06:54:32.348356 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 11.87s 2026-04-05 06:54:32.348367 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.09s 2026-04-05 06:54:32.348377 | orchestrator | nova : Restart nova-metadata container --------------------------------- 10.84s 2026-04-05 06:54:32.348388 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.21s 2026-04-05 06:54:32.539084 
| orchestrator | + osism apply nova-update-cell-mappings 2026-04-05 06:54:44.004932 | orchestrator | 2026-04-05 06:54:44 | INFO  | Prepare task for execution of nova-update-cell-mappings. 2026-04-05 06:54:44.075946 | orchestrator | 2026-04-05 06:54:44 | INFO  | Task 4b3588c9-ef01-4ad0-862a-e0717e1502b8 (nova-update-cell-mappings) was prepared for execution. 2026-04-05 06:54:44.076048 | orchestrator | 2026-04-05 06:54:44 | INFO  | It takes a moment until task 4b3588c9-ef01-4ad0-862a-e0717e1502b8 (nova-update-cell-mappings) has been started and output is visible here. 2026-04-05 06:55:07.613590 | orchestrator | 2026-04-05 06:55:07.613689 | orchestrator | PLAY [Update Nova cell mappings] *********************************************** 2026-04-05 06:55:07.613706 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 06:55:07.613720 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 06:55:07.613743 | orchestrator | 2026-04-05 06:55:07.613755 | orchestrator | TASK [Get list of Nova cells] ************************************************** 2026-04-05 06:55:07.613767 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 06:55:07.613832 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 06:55:07.613856 | orchestrator | Sunday 05 April 2026 06:54:48 +0000 (0:00:01.221) 0:00:01.221 ********** 2026-04-05 06:55:07.613867 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:55:07.613878 | orchestrator | 2026-04-05 06:55:07.613889 | orchestrator | TASK [Parse cell information] ************************************************** 2026-04-05 06:55:07.613900 | orchestrator | Sunday 05 April 2026 06:55:01 +0000 (0:00:13.000) 0:00:14.222 ********** 2026-04-05 06:55:07.613911 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:55:07.613922 | orchestrator | 2026-04-05 06:55:07.613933 | orchestrator | TASK [Display cells to update] 
************************************************* 2026-04-05 06:55:07.613943 | orchestrator | Sunday 05 April 2026 06:55:01 +0000 (0:00:00.142) 0:00:14.365 ********** 2026-04-05 06:55:07.613954 | orchestrator | ok: [testbed-node-0] => { 2026-04-05 06:55:07.613965 | orchestrator |  "msg": "Cells to update: [{'name': '', 'uuid': '5d817771-63c2-4b0a-aab3-057927580a68'}]" 2026-04-05 06:55:07.613998 | orchestrator | } 2026-04-05 06:55:07.614010 | orchestrator | 2026-04-05 06:55:07.614076 | orchestrator | TASK [Use specified cell UUID if provided] ************************************* 2026-04-05 06:55:07.614088 | orchestrator | Sunday 05 April 2026 06:55:02 +0000 (0:00:00.134) 0:00:14.499 ********** 2026-04-05 06:55:07.614099 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:55:07.614110 | orchestrator | 2026-04-05 06:55:07.614121 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] *** 2026-04-05 06:55:07.614132 | orchestrator | Sunday 05 April 2026 06:55:02 +0000 (0:00:00.123) 0:00:14.622 ********** 2026-04-05 06:55:07.614143 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:55:07.614163 | orchestrator | 2026-04-05 06:55:07.614192 | orchestrator | TASK [Update Nova cell mappings] *********************************************** 2026-04-05 06:55:07.614215 | orchestrator | Sunday 05 April 2026 06:55:02 +0000 (0:00:00.127) 0:00:14.750 ********** 2026-04-05 06:55:07.614227 | orchestrator | changed: [testbed-node-0] => (item=5d817771-63c2-4b0a-aab3-057927580a68) 2026-04-05 06:55:07.614241 | orchestrator | 2026-04-05 06:55:07.614253 | orchestrator | TASK [Display update results] ************************************************** 2026-04-05 06:55:07.614266 | orchestrator | Sunday 05 April 2026 06:55:06 +0000 (0:00:04.344) 0:00:19.094 ********** 2026-04-05 06:55:07.614278 | orchestrator | ok: [testbed-node-0] => (item=5d817771-63c2-4b0a-aab3-057927580a68) => { 2026-04-05 06:55:07.614290 | orchestrator |  
"msg": "Cell 5d817771-63c2-4b0a-aab3-057927580a68 updated successfully" 2026-04-05 06:55:07.614303 | orchestrator | } 2026-04-05 06:55:07.614315 | orchestrator | 2026-04-05 06:55:07.614328 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 06:55:07.614341 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 06:55:07.614355 | orchestrator | 2026-04-05 06:55:07.614367 | orchestrator | 2026-04-05 06:55:07.614380 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 06:55:07.614393 | orchestrator | Sunday 05 April 2026 06:55:07 +0000 (0:00:00.770) 0:00:19.865 ********** 2026-04-05 06:55:07.614405 | orchestrator | =============================================================================== 2026-04-05 06:55:07.614418 | orchestrator | Get list of Nova cells ------------------------------------------------- 13.00s 2026-04-05 06:55:07.614431 | orchestrator | Update Nova cell mappings ----------------------------------------------- 4.34s 2026-04-05 06:55:07.614447 | orchestrator | Display update results -------------------------------------------------- 0.77s 2026-04-05 06:55:07.614465 | orchestrator | Parse cell information -------------------------------------------------- 0.14s 2026-04-05 06:55:07.614482 | orchestrator | Display cells to update ------------------------------------------------- 0.13s 2026-04-05 06:55:07.614495 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 0.13s 2026-04-05 06:55:07.614509 | orchestrator | Use specified cell UUID if provided ------------------------------------- 0.12s 2026-04-05 06:55:07.741626 | orchestrator | + osism apply -a upgrade nova 2026-04-05 06:55:08.884419 | orchestrator | 2026-04-05 06:55:08 | INFO  | Prepare task for execution of nova. 
2026-04-05 06:55:08.940633 | orchestrator | 2026-04-05 06:55:08 | INFO  | Task 96142f63-1e47-47de-8a6a-6c04784e34c8 (nova) was prepared for execution.
2026-04-05 06:55:08.940719 | orchestrator | 2026-04-05 06:55:08 | INFO  | It takes a moment until task 96142f63-1e47-47de-8a6a-6c04784e34c8 (nova) has been started and output is visible here.
2026-04-05 06:56:20.934953 | orchestrator |
2026-04-05 06:56:20.935074 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 06:56:20.935092 | orchestrator |
2026-04-05 06:56:20.935105 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-05 06:56:20.935116 | orchestrator | Sunday 05 April 2026 06:55:13 +0000 (0:00:01.586) 0:00:01.586 **********
2026-04-05 06:56:20.935242 | orchestrator | changed: [testbed-manager]
2026-04-05 06:56:20.935268 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:56:20.935297 | orchestrator | changed: [testbed-node-1]
2026-04-05 06:56:20.935318 | orchestrator | changed: [testbed-node-2]
2026-04-05 06:56:20.935337 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:56:20.935357 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:56:20.935377 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:56:20.935396 | orchestrator |
2026-04-05 06:56:20.935416 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 06:56:20.935437 | orchestrator | Sunday 05 April 2026 06:55:18 +0000 (0:00:04.186) 0:00:05.772 **********
2026-04-05 06:56:20.935459 | orchestrator | changed: [testbed-manager]
2026-04-05 06:56:20.935479 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:56:20.935500 | orchestrator | changed: [testbed-node-1]
2026-04-05 06:56:20.935522 | orchestrator | changed: [testbed-node-2]
2026-04-05 06:56:20.935563 | orchestrator | changed: [testbed-node-3]
2026-04-05 06:56:20.935588 | orchestrator | changed: [testbed-node-4]
2026-04-05 06:56:20.935609 | orchestrator | changed: [testbed-node-5]
2026-04-05 06:56:20.935624 | orchestrator |
2026-04-05 06:56:20.935637 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 06:56:20.935654 | orchestrator | Sunday 05 April 2026 06:55:20 +0000 (0:00:02.114) 0:00:07.887 **********
2026-04-05 06:56:20.935678 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-05 06:56:20.935703 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-05 06:56:20.935721 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-05 06:56:20.935739 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-05 06:56:20.935756 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-05 06:56:20.935773 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-05 06:56:20.935792 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-05 06:56:20.935813 | orchestrator |
2026-04-05 06:56:20.935830 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-05 06:56:20.935847 | orchestrator |
2026-04-05 06:56:20.935866 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-05 06:56:20.935882 | orchestrator | Sunday 05 April 2026 06:55:22 +0000 (0:00:02.474) 0:00:10.361 **********
2026-04-05 06:56:20.935944 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:56:20.935960 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:56:20.935971 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:56:20.935982 | orchestrator |
2026-04-05 06:56:20.935993 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 06:56:20.936003 | orchestrator | Sunday 05 April 2026 06:55:24 +0000 (0:00:02.319) 0:00:12.681 **********
2026-04-05 06:56:20.936014 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:56:20.936025 | orchestrator |
2026-04-05 06:56:20.936035 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-05 06:56:20.936046 | orchestrator | Sunday 05 April 2026 06:55:27 +0000 (0:00:02.565) 0:00:15.246 **********
2026-04-05 06:56:20.936057 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936068 | orchestrator |
2026-04-05 06:56:20.936079 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-05 06:56:20.936090 | orchestrator | Sunday 05 April 2026 06:55:29 +0000 (0:00:01.858) 0:00:17.104 **********
2026-04-05 06:56:20.936100 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936111 | orchestrator |
2026-04-05 06:56:20.936121 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-05 06:56:20.936133 | orchestrator | Sunday 05 April 2026 06:55:31 +0000 (0:00:02.126) 0:00:19.231 **********
2026-04-05 06:56:20.936143 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936154 | orchestrator |
2026-04-05 06:56:20.936165 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-05 06:56:20.936190 | orchestrator | Sunday 05 April 2026 06:55:35 +0000 (0:00:04.084) 0:00:23.315 **********
2026-04-05 06:56:20.936201 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936271 | orchestrator |
2026-04-05 06:56:20.936284 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-05 06:56:20.936294 | orchestrator |
2026-04-05 06:56:20.936305 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-05 06:56:20.936316 | orchestrator | Sunday 05 April 2026 06:55:54 +0000 (0:00:18.966) 0:00:42.282 **********
2026-04-05 06:56:20.936326 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:56:20.936337 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:56:20.936348 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:56:20.936358 | orchestrator |
2026-04-05 06:56:20.936369 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-05 06:56:20.936380 | orchestrator | Sunday 05 April 2026 06:55:55 +0000 (0:00:01.421) 0:00:43.703 **********
2026-04-05 06:56:20.936391 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 06:56:20.936401 | orchestrator |
2026-04-05 06:56:20.936412 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-05 06:56:20.936422 | orchestrator | Sunday 05 April 2026 06:55:57 +0000 (0:00:01.701) 0:00:45.404 **********
2026-04-05 06:56:20.936433 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:56:20.936444 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:56:20.936454 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936465 | orchestrator |
2026-04-05 06:56:20.936476 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-05 06:56:20.936486 | orchestrator | Sunday 05 April 2026 06:55:59 +0000 (0:00:01.508) 0:00:46.913 **********
2026-04-05 06:56:20.936497 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:56:20.936508 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:56:20.936518 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:56:20.936529 | orchestrator |
2026-04-05 06:56:20.936566 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-05 06:56:20.936588 | orchestrator | Sunday 05 April 2026 06:56:01 +0000 (0:00:02.004) 0:00:48.918 **********
2026-04-05 06:56:20.936616 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:56:20.936640 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:56:20.936660 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:56:20.936680 | orchestrator | 2026-04-05 06:56:20.936699 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-05 06:56:20.936719 | orchestrator | Sunday 05 April 2026 06:56:04 +0000 (0:00:03.612) 0:00:52.531 ********** 2026-04-05 06:56:20.936739 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:56:20.936760 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:56:20.936782 | orchestrator | ok: [testbed-node-0] 2026-04-05 06:56:20.936803 | orchestrator | 2026-04-05 06:56:20.936825 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-05 06:56:20.936845 | orchestrator | 2026-04-05 06:56:20.936867 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 06:56:20.936941 | orchestrator | Sunday 05 April 2026 06:56:17 +0000 (0:00:12.557) 0:01:05.088 ********** 2026-04-05 06:56:20.936981 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:56:20.937000 | orchestrator | 2026-04-05 06:56:20.937018 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-05 06:56:20.937037 | orchestrator | Sunday 05 April 2026 06:56:19 +0000 (0:00:02.293) 0:01:07.382 ********** 2026-04-05 06:56:20.937062 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:20.937105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:20.937148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:32.401508 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:32.401628 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:32.401671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:32.401686 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:32.401717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:32.401736 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-05 06:56:32.401749 | orchestrator | 2026-04-05 06:56:32.401762 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-05 06:56:32.401784 | orchestrator | Sunday 05 April 2026 06:56:22 +0000 (0:00:03.210) 0:01:10.592 ********** 2026-04-05 06:56:32.401795 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:56:32.401808 | orchestrator | 2026-04-05 06:56:32.401819 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-05 06:56:32.401829 | orchestrator | Sunday 05 April 2026 06:56:23 +0000 (0:00:01.104) 0:01:11.697 ********** 2026-04-05 06:56:32.401840 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:56:32.401851 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:56:32.401861 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:56:32.401872 | orchestrator | 2026-04-05 06:56:32.401883 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-05 06:56:32.401894 | orchestrator | Sunday 05 April 2026 06:56:25 +0000 (0:00:01.630) 0:01:13.327 ********** 2026-04-05 06:56:32.401935 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 06:56:32.401948 | orchestrator | 2026-04-05 06:56:32.401958 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-05 06:56:32.401969 | orchestrator | Sunday 05 April 2026 06:56:27 +0000 (0:00:02.131) 0:01:15.458 ********** 2026-04-05 06:56:32.401980 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:56:32.401990 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:56:32.402001 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:56:32.402012 | orchestrator | 2026-04-05 06:56:32.402083 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-05 06:56:32.402103 | orchestrator | Sunday 05 April 2026 06:56:29 +0000 
(0:00:01.332) 0:01:16.791 ********** 2026-04-05 06:56:32.402122 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:56:32.402141 | orchestrator | 2026-04-05 06:56:32.402159 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-05 06:56:32.402177 | orchestrator | Sunday 05 April 2026 06:56:30 +0000 (0:00:01.946) 0:01:18.737 ********** 2026-04-05 06:56:32.402196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:32.402232 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:35.824474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:35.824567 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:35.824583 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-05 06:56:35.824609 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:35.824647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:35.824661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:35.824671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:35.824681 | orchestrator | 2026-04-05 06:56:35.824692 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-05 06:56:35.824702 | orchestrator | Sunday 05 April 2026 06:56:35 +0000 (0:00:04.392) 0:01:23.130 ********** 2026-04-05 06:56:35.824714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:35.824732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:37.566262 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:56:37.566281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:37.566340 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:56:37.566377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:37.566413 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:56:37.566423 | orchestrator | 2026-04-05 06:56:37.566434 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 06:56:37.566446 | orchestrator | Sunday 05 April 2026 06:56:37 +0000 (0:00:01.743) 0:01:24.873 ********** 2026-04-05 06:56:37.566457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:37.566483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:40.860610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:40.860716 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:56:40.860736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:40.860750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:40.860792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:40.860805 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:56:40.860843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:40.860856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:40.860869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:56:40.860880 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:56:40.860891 | orchestrator | 2026-04-05 06:56:40.860903 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-05 06:56:40.860989 | orchestrator | Sunday 05 April 2026 06:56:39 +0000 (0:00:02.204) 0:01:27.078 ********** 2026-04-05 06:56:40.861002 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:40.861028 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.875630 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.875748 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.875789 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.875819 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:48.875879 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:48.875894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.875907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:48.875992 | orchestrator | 2026-04-05 06:56:48.876007 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-05 06:56:48.876020 | orchestrator | Sunday 05 April 2026 06:56:45 +0000 (0:00:06.225) 0:01:33.303 ********** 2026-04-05 06:56:48.876032 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:48.876060 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:55.935824 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:55.935918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:55.936012 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:55.936050 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:56:55.936061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:55.936070 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:55.936083 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:56:55.936092 | orchestrator | 2026-04-05 06:56:55.936101 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-05 06:56:55.936110 | orchestrator | Sunday 05 April 2026 06:56:55 +0000 (0:00:09.945) 0:01:43.249 ********** 2026-04-05 06:56:55.936118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:56:55.936135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:07.801354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:07.801474 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:57:07.801522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:07.801537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:07.801564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:07.801577 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:57:07.801608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:07.801631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:07.801643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 06:57:07.801655 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:07.801666 | orchestrator |
2026-04-05 06:57:07.801678 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-05 06:57:07.801690 | orchestrator | Sunday 05 April 2026 06:56:57 +0000 (0:00:02.068) 0:01:45.317 **********
2026-04-05 06:57:07.801702 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:57:07.801713 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:57:07.801724 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:07.801735 | orchestrator |
2026-04-05 06:57:07.801746 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-04-05 06:57:07.801757 | orchestrator | Sunday 05 April 2026 06:56:59 +0000 (0:00:02.016) 0:01:47.333 **********
2026-04-05 06:57:07.801767 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:57:07.801778 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:57:07.801789 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:07.801800 | orchestrator |
2026-04-05 06:57:07.801811 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-04-05 06:57:07.801821 | orchestrator | Sunday 05 April 2026 06:57:01 +0000 (0:00:01.753) 0:01:49.087 **********
2026-04-05 06:57:07.801833 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-04-05 06:57:07.801844 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-05 06:57:07.801855 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:57:07.801870 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-04-05 06:57:07.801882 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-05 06:57:07.801894 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:57:07.801908 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-04-05 06:57:07.801921 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-05 06:57:07.801933 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:07.802088 | orchestrator |
2026-04-05 06:57:07.802104 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-04-05 06:57:07.802117 | orchestrator | Sunday 05 April 2026 06:57:02 +0000 (0:00:01.388) 0:01:50.475 **********
2026-04-05 06:57:07.802130 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-04-05 06:57:07.802155 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-04-05 06:57:07.802166 | orchestrator |
2026-04-05 06:57:07.802176 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-04-05 06:57:07.802187 | orchestrator | Sunday 05 April 2026 06:57:05 +0000 (0:00:03.025) 0:01:53.501 **********
2026-04-05 06:57:07.802198 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:57:07.802208 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:57:07.802219 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:57:07.802230 | orchestrator |
2026-04-05 06:57:32.674328 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-04-05 06:57:32.674443 | orchestrator | Sunday 05 April 2026 06:57:08 +0000 (0:00:02.822) 0:01:56.323 **********
2026-04-05 06:57:32.674458 | orchestrator | ok: [testbed-node-0]
2026-04-05 06:57:32.674470 | orchestrator | ok: [testbed-node-1]
2026-04-05 06:57:32.674481 | orchestrator | ok: [testbed-node-2]
2026-04-05 06:57:32.674492 | orchestrator |
2026-04-05 06:57:32.674504 | orchestrator | TASK [nova : Run Nova upgrade checks] ******************************************
2026-04-05 06:57:32.674515 | orchestrator | Sunday 05 April 2026 06:57:12 +0000 (0:00:18.184) 0:01:59.851 **********
2026-04-05 06:57:32.674526 | orchestrator | changed: [testbed-node-0]
2026-04-05 06:57:32.674538 | orchestrator |
2026-04-05 06:57:32.674549 | orchestrator | TASK [nova : Upgrade status check result] **************************************
2026-04-05 06:57:32.674560 | orchestrator | Sunday 05 April 2026 06:57:30 +0000 (0:00:18.184) 0:02:18.036 **********
2026-04-05 06:57:32.674570 | orchestrator | skipping: [testbed-node-0]
2026-04-05 06:57:32.674581 | orchestrator | skipping: [testbed-node-1]
2026-04-05 06:57:32.674592 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:32.674603 | orchestrator |
2026-04-05 06:57:32.674614 | orchestrator | TASK [nova : Stopping top level nova services] *********************************
2026-04-05 06:57:32.674624 | orchestrator | Sunday 05 April 2026 06:57:31 +0000 (0:00:01.447) 0:02:19.484 **********
2026-04-05 06:57:32.674641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:32.674659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:32.674733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:32.674748 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:57:32.674779 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:32.674793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:32.674805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:32.674817 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:57:32.674833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})
2026-04-05 06:57:32.674864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 06:57:38.003666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 06:57:38.003796 | orchestrator | skipping: [testbed-node-2]
2026-04-05 06:57:38.003816 | orchestrator |
2026-04-05 06:57:38.003829 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-05 06:57:38.003842 | orchestrator | Sunday 05 April 2026 06:57:34 +0000 (0:00:02.429) 0:02:21.913 ********** 2026-04-05 06:57:38.003855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.003887 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.003934 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.004067 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.004097 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.004151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 06:57:38.004173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 06:57:38.004207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})
2026-04-05 06:57:41.671284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 06:57:41.671377 | orchestrator |
2026-04-05 06:57:41.671390 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-04-05 06:57:41.671399 | orchestrator | Sunday 05 April 2026 06:57:39 +0000 (0:00:05.082) 0:02:26.995 **********
2026-04-05 06:57:41.671407 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 06:57:41.671416 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:57:41.671424 | orchestrator | }
2026-04-05 06:57:41.671431 | orchestrator | ok: [testbed-node-1] => {
2026-04-05 06:57:41.671438 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:57:41.671444 | orchestrator | }
2026-04-05 06:57:41.671452 | orchestrator | ok: [testbed-node-2] => {
2026-04-05 06:57:41.671459 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 06:57:41.671465 | orchestrator | }
2026-04-05 06:57:41.671471 | orchestrator |
2026-04-05 06:57:41.671478 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 06:57:41.671506 | orchestrator | Sunday 05 April 2026 06:57:40 +0000 (0:00:01.497) 0:02:28.493 **********
2026-04-05 06:57:41.671516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:41.671553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:41.671564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:41.671572 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:57:41.671595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:41.671610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:41.671621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:57:41.671627 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:57:41.671634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:57:41.671648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 06:58:25.345268 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 06:58:25.345413 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:25.345432 | orchestrator | 2026-04-05 06:58:25.345445 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:58:25.345457 | orchestrator | Sunday 05 April 2026 06:57:42 +0000 (0:00:02.199) 0:02:30.693 ********** 2026-04-05 06:58:25.345468 | orchestrator | 2026-04-05 06:58:25.345479 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:58:25.345490 | orchestrator | Sunday 05 April 2026 06:57:43 +0000 (0:00:00.521) 0:02:31.215 ********** 2026-04-05 06:58:25.345501 | orchestrator | 2026-04-05 06:58:25.345512 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-05 06:58:25.345522 | orchestrator | Sunday 05 April 2026 06:57:43 +0000 (0:00:00.532) 0:02:31.747 ********** 2026-04-05 06:58:25.345533 | orchestrator | 2026-04-05 06:58:25.345544 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-05 06:58:25.345554 | orchestrator | 2026-04-05 06:58:25.345565 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:58:25.345576 | orchestrator | Sunday 05 April 2026 06:57:45 +0000 (0:00:01.975) 0:02:33.722 ********** 2026-04-05 
06:58:25.345589 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:58:25.345601 | orchestrator | 2026-04-05 06:58:25.345627 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-05 06:58:25.345638 | orchestrator | Sunday 05 April 2026 06:57:48 +0000 (0:00:02.738) 0:02:36.461 ********** 2026-04-05 06:58:25.345650 | orchestrator | changed: [testbed-node-3] 2026-04-05 06:58:25.345661 | orchestrator | 2026-04-05 06:58:25.345671 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-05 06:58:25.345682 | orchestrator | Sunday 05 April 2026 06:57:53 +0000 (0:00:04.431) 0:02:40.893 ********** 2026-04-05 06:58:25.345693 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:25.345705 | orchestrator | 2026-04-05 06:58:25.345716 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-05 06:58:25.345727 | orchestrator | Sunday 05 April 2026 06:57:55 +0000 (0:00:02.359) 0:02:43.253 ********** 2026-04-05 06:58:25.345738 | orchestrator | included: service-image-info for testbed-node-3 2026-04-05 06:58:25.345748 | orchestrator | 2026-04-05 06:58:25.345759 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-05 06:58:25.345770 | orchestrator | Sunday 05 April 2026 06:57:57 +0000 (0:00:02.077) 0:02:45.330 ********** 2026-04-05 06:58:25.345780 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:25.345794 | orchestrator | 2026-04-05 06:58:25.345806 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-05 06:58:25.345819 | orchestrator | Sunday 05 April 2026 06:58:01 +0000 (0:00:04.412) 0:02:49.743 ********** 2026-04-05 06:58:25.345832 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:25.345845 
| orchestrator | 2026-04-05 06:58:25.345859 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-05 06:58:25.345872 | orchestrator | Sunday 05 April 2026 06:58:05 +0000 (0:00:03.029) 0:02:52.772 ********** 2026-04-05 06:58:25.345885 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:25.345897 | orchestrator | 2026-04-05 06:58:25.345910 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-05 06:58:25.345923 | orchestrator | Sunday 05 April 2026 06:58:07 +0000 (0:00:02.946) 0:02:55.718 ********** 2026-04-05 06:58:25.345944 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:25.345957 | orchestrator | 2026-04-05 06:58:25.345970 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-05 06:58:25.345983 | orchestrator | Sunday 05 April 2026 06:58:10 +0000 (0:00:03.005) 0:02:58.724 ********** 2026-04-05 06:58:25.345995 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:25.346008 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:25.346109 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:25.346124 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:25.346136 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:58:25.346147 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:58:25.346158 | orchestrator | 2026-04-05 06:58:25.346168 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-05 06:58:25.346179 | orchestrator | Sunday 05 April 2026 06:58:15 +0000 (0:00:04.886) 0:03:03.611 ********** 2026-04-05 06:58:25.346190 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:25.346201 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:25.346212 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:25.346222 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:25.346233 | orchestrator | 
skipping: [testbed-node-4] 2026-04-05 06:58:25.346244 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:58:25.346255 | orchestrator | 2026-04-05 06:58:25.346266 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-05 06:58:25.346276 | orchestrator | Sunday 05 April 2026 06:58:20 +0000 (0:00:04.829) 0:03:08.440 ********** 2026-04-05 06:58:25.346287 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:58:25.346298 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:25.346309 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:58:25.346320 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:25.346331 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:25.346360 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:25.346372 | orchestrator | 2026-04-05 06:58:25.346383 | orchestrator | TASK [nova-cell : Stopping nova cell services] ********************************* 2026-04-05 06:58:25.346394 | orchestrator | Sunday 05 April 2026 06:58:24 +0000 (0:00:03.618) 0:03:12.060 ********** 2026-04-05 06:58:25.346407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2026-04-05 06:58:25.346426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:25.346440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:25.346463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:25.346474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:25.346494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:36.131894 | orchestrator | skipping: [testbed-node-3] 
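[editor's note] The "Check that the new Libvirt version is >= current" task above gates the stop/upgrade of the nova_libvirt container on a version comparison between the cached new libvirt version and the one currently running. A minimal sketch of that comparison logic, assuming simple dotted version strings as seen in the image tags (hypothetical helper names, not the actual kolla-ansible implementation):

```python
# Hedged sketch: compare a "new" libvirt version against the currently
# running one, as the nova-cell upgrade tasks do before stopping services.
# version_tuple / libvirt_upgrade_safe are illustrative names only.

def version_tuple(version: str) -> tuple:
    """Turn a dotted version string like '10.0.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def libvirt_upgrade_safe(current: str, new: str) -> bool:
    """True when the new libvirt version is >= the current one."""
    return version_tuple(new) >= version_tuple(current)

# e.g. upgrading from libvirt 9.x to the 10.0.0 image in the dump above
print(libvirt_upgrade_safe("9.0.0", "10.0.0"))   # True
print(libvirt_upgrade_safe("10.0.0", "9.0.0"))   # False
```

In this run the comparison tasks were skipped on all nodes (no running nova_libvirt container to compare against on the control nodes, and the check condition not met on the compute nodes), so no services were stopped.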
2026-04-05 06:58:36.132093 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:58:36.132151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:36.132191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:36.132250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:36.132276 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:58:36.132297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:36.132320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:58:36.132342 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:36.132388 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:36.132412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:58:36.132449 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:36.132483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:36.132505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:58:36.132526 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:36.132549 | orchestrator | 2026-04-05 06:58:36.132572 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-05 06:58:36.132595 | orchestrator | Sunday 05 April 2026 06:58:27 +0000 (0:00:03.485) 0:03:15.545 ********** 2026-04-05 06:58:36.132617 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:36.132639 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:36.132661 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:36.132683 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:58:36.132705 | orchestrator | 2026-04-05 06:58:36.132726 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-05 06:58:36.132747 | orchestrator | Sunday 05 April 2026 06:58:29 +0000 (0:00:02.163) 0:03:17.708 ********** 2026-04-05 06:58:36.132770 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-05 06:58:36.132791 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-05 06:58:36.132812 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-05 06:58:36.132832 | orchestrator | 
2026-04-05 06:58:36.132852 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-05 06:58:36.132872 | orchestrator | Sunday 05 April 2026 06:58:31 +0000 (0:00:01.988) 0:03:19.697 ********** 2026-04-05 06:58:36.132893 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-05 06:58:36.132915 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-05 06:58:36.132935 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-05 06:58:36.132954 | orchestrator | 2026-04-05 06:58:36.132976 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-05 06:58:36.132997 | orchestrator | Sunday 05 April 2026 06:58:34 +0000 (0:00:02.196) 0:03:21.893 ********** 2026-04-05 06:58:36.133019 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-05 06:58:36.133031 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:36.133070 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-05 06:58:36.133083 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:58:36.133094 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-05 06:58:36.133104 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:58:36.133115 | orchestrator | 2026-04-05 06:58:36.133126 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-05 06:58:36.133137 | orchestrator | Sunday 05 April 2026 06:58:35 +0000 (0:00:01.515) 0:03:23.409 ********** 2026-04-05 06:58:36.133158 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 06:58:36.133169 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 06:58:36.133180 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:36.133201 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
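[editor's note] The module-load tasks above load br_netfilter and persist it via a modules-load.d file, after which the nova-cell role enables the bridge-nf-call sysctls on the compute nodes. A rough sketch of the file contents those steps produce, assuming the standard /etc/modules-load.d and sysctl.d formats (content generation only, no privileged writes; function names are illustrative):

```python
# Hedged sketch: generate the persistence-file contents that the
# module-load role and the bridge-nf-call sysctl task would write.

def modules_load_content(modules: list) -> str:
    """One module name per line, as /etc/modules-load.d/<name>.conf expects."""
    return "\n".join(modules) + "\n"

def sysctl_content(settings: dict) -> str:
    """key = value lines, as a sysctl.d drop-in expects."""
    return "\n".join(f"{key} = {value}" for key, value in settings.items()) + "\n"

print(modules_load_content(["br_netfilter"]))
print(sysctl_content({
    "net.bridge.bridge-nf-call-iptables": 1,
    "net.bridge.bridge-nf-call-ip6tables": 1,
}))
```

These sysctls only exist once br_netfilter is loaded, which is why the module-load tasks run first and why the sysctl task is skipped on the control nodes (testbed-node-0/1/2) where the module is not loaded.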
2026-04-05 06:58:45.363942 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 06:58:45.364098 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:45.364118 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 06:58:45.364130 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 06:58:45.364140 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-05 06:58:45.364150 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-05 06:58:45.364160 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:45.364169 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-05 06:58:45.364179 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 06:58:45.364189 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 06:58:45.364198 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-05 06:58:45.364208 | orchestrator | 2026-04-05 06:58:45.364218 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-05 06:58:45.364243 | orchestrator | Sunday 05 April 2026 06:58:37 +0000 (0:00:02.022) 0:03:25.432 ********** 2026-04-05 06:58:45.364253 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:45.364263 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:45.364273 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:45.364283 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:45.364292 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:58:45.364302 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:58:45.364311 | orchestrator | 2026-04-05 06:58:45.364321 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] 
*************************************** 2026-04-05 06:58:45.364331 | orchestrator | Sunday 05 April 2026 06:58:39 +0000 (0:00:02.296) 0:03:27.728 ********** 2026-04-05 06:58:45.364341 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:45.364350 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:45.364360 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:45.364369 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:58:45.364379 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:58:45.364388 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:58:45.364398 | orchestrator | 2026-04-05 06:58:45.364407 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-05 06:58:45.364417 | orchestrator | Sunday 05 April 2026 06:58:43 +0000 (0:00:03.440) 0:03:31.169 ********** 2026-04-05 06:58:45.364430 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364443 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364477 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364506 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}}) 2026-04-05 06:58:45.364523 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364534 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364563 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:45.364580 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 
06:58:51.497539 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497659 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497683 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497692 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497699 | orchestrator | 2026-04-05 06:58:51.497708 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:58:51.497716 | orchestrator | Sunday 05 April 2026 06:58:47 +0000 (0:00:03.667) 0:03:34.837 ********** 2026-04-05 06:58:51.497736 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 06:58:51.497745 | orchestrator | 2026-04-05 06:58:51.497752 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-05 06:58:51.497759 | orchestrator | Sunday 05 April 2026 06:58:49 +0000 (0:00:02.182) 0:03:37.020 ********** 2026-04-05 06:58:51.497771 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497779 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497792 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:51.497812 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524575 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524657 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524693 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524701 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524708 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524728 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524741 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524748 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524760 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:58:55.524767 | orchestrator | 2026-04-05 06:58:55.524774 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-05 06:58:55.524782 | orchestrator | Sunday 05 April 2026 06:58:54 +0000 (0:00:04.836) 0:03:41.856 ********** 2026-04-05 06:58:55.524791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:55.524803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:56.264581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:56.264713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:56.264739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:56.264759 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:58:56.264780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:56.264801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 
06:58:56.264853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:56.264878 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:58:56.264898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:56.264937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:56.264950 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:56.264962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:56.264973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:58:56.264985 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:58:56.264999 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:58:56.265019 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:58:56.265118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:59.404339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
 2026-04-05 06:58:59.404474 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:58:59.404492 | orchestrator | 2026-04-05 06:58:59.404504 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-05 06:58:59.404516 | orchestrator | Sunday 05 April 2026 06:58:57 +0000 (0:00:03.393) 0:03:45.250 ********** 2026-04-05 06:58:59.404530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:59.404660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:59.404675 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:59.404688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:58:59.404736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:59.404762 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:58:59.404774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:58:59.404786 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:58:59.404797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:58:59.404810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:59.404823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:58:59.404848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:59:29.189613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:59:29.189766 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:59:29.189800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 06:59:29.189822 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:29.189842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:59:29.189861 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:59:29.189880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-05 06:59:29.189900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-05 06:59:29.189954 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 06:59:29.189973 | orchestrator | 2026-04-05 06:59:29.189991 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-05 06:59:29.190010 | orchestrator | Sunday 05 April 2026 06:59:01 +0000 (0:00:04.010) 0:03:49.260 ********** 2026-04-05 06:59:29.190150 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:59:29.190165 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:59:29.190188 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:59:29.190203 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:59:29.190216 | orchestrator | 2026-04-05 06:59:29.190244 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-05 06:59:29.190257 | orchestrator | Sunday 05 April 2026 06:59:03 +0000 (0:00:02.333) 0:03:51.594 ********** 2026-04-05 06:59:29.190271 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:59:29.190304 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:59:29.190318 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 06:59:29.190330 | orchestrator | 2026-04-05 06:59:29.190344 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-05 06:59:29.190357 | orchestrator | Sunday 05 April 2026 06:59:05 +0000 (0:00:02.079) 0:03:53.674 ********** 2026-04-05 06:59:29.190369 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:59:29.190382 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:59:29.190394 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 06:59:29.190406 | orchestrator | 2026-04-05 06:59:29.190420 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-05 06:59:29.190432 | orchestrator | Sunday 05 April 2026 06:59:07 +0000 (0:00:02.016) 
0:03:55.690 ********** 2026-04-05 06:59:29.190445 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:59:29.190458 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:59:29.190471 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:59:29.190482 | orchestrator | 2026-04-05 06:59:29.190493 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-05 06:59:29.190503 | orchestrator | Sunday 05 April 2026 06:59:09 +0000 (0:00:01.777) 0:03:57.468 ********** 2026-04-05 06:59:29.190514 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:59:29.190524 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:59:29.190535 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:59:29.190545 | orchestrator | 2026-04-05 06:59:29.190556 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-05 06:59:29.190567 | orchestrator | Sunday 05 April 2026 06:59:11 +0000 (0:00:01.687) 0:03:59.156 ********** 2026-04-05 06:59:29.190577 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:59:29.190588 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:59:29.190599 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:59:29.190609 | orchestrator | 2026-04-05 06:59:29.190620 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-05 06:59:29.190630 | orchestrator | Sunday 05 April 2026 06:59:13 +0000 (0:00:02.185) 0:04:01.342 ********** 2026-04-05 06:59:29.190641 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:59:29.190651 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:59:29.190662 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:59:29.190673 | orchestrator | 2026-04-05 06:59:29.190683 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-05 06:59:29.190694 | 
orchestrator | Sunday 05 April 2026 06:59:15 +0000 (0:00:02.175) 0:04:03.517 ********** 2026-04-05 06:59:29.190704 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-05 06:59:29.190715 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-05 06:59:29.190725 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-05 06:59:29.190749 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-05 06:59:29.190759 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-05 06:59:29.190770 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-05 06:59:29.190780 | orchestrator | 2026-04-05 06:59:29.190791 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-05 06:59:29.190802 | orchestrator | Sunday 05 April 2026 06:59:20 +0000 (0:00:04.963) 0:04:08.480 ********** 2026-04-05 06:59:29.190812 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:29.190823 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:59:29.190833 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:29.190844 | orchestrator | 2026-04-05 06:59:29.190855 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-05 06:59:29.190865 | orchestrator | Sunday 05 April 2026 06:59:22 +0000 (0:00:01.360) 0:04:09.841 ********** 2026-04-05 06:59:29.190876 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:29.190887 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:59:29.190897 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:29.190908 | orchestrator | 2026-04-05 06:59:29.190918 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-05 06:59:29.190929 | orchestrator | Sunday 05 April 2026 06:59:23 +0000 (0:00:01.412) 0:04:11.253 ********** 2026-04-05 06:59:29.190940 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:59:29.190950 | 
orchestrator | ok: [testbed-node-4] 2026-04-05 06:59:29.190961 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:59:29.190971 | orchestrator | 2026-04-05 06:59:29.190982 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-05 06:59:29.190993 | orchestrator | Sunday 05 April 2026 06:59:26 +0000 (0:00:02.620) 0:04:13.873 ********** 2026-04-05 06:59:29.191005 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:59:29.191017 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:59:29.191028 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-05 06:59:29.191039 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:59:29.191055 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:59:29.191073 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-05 06:59:50.917895 | orchestrator | 2026-04-05 06:59:50.918001 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-05 06:59:50.918055 | 
orchestrator | Sunday 05 April 2026 06:59:30 +0000 (0:00:04.183) 0:04:18.057 ********** 2026-04-05 06:59:50.918064 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:59:50.918072 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:59:50.918078 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:59:50.918085 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-05 06:59:50.918091 | orchestrator | ok: [testbed-node-3] 2026-04-05 06:59:50.918098 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-05 06:59:50.918105 | orchestrator | ok: [testbed-node-4] 2026-04-05 06:59:50.918162 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-05 06:59:50.918169 | orchestrator | ok: [testbed-node-5] 2026-04-05 06:59:50.918193 | orchestrator | 2026-04-05 06:59:50.918200 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-05 06:59:50.918207 | orchestrator | Sunday 05 April 2026 06:59:34 +0000 (0:00:04.316) 0:04:22.373 ********** 2026-04-05 06:59:50.918213 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:59:50.918219 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:59:50.918225 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:59:50.918232 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 06:59:50.918239 | orchestrator | 2026-04-05 06:59:50.918245 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-05 06:59:50.918251 | orchestrator | Sunday 05 April 2026 06:59:38 +0000 (0:00:03.484) 0:04:25.858 ********** 2026-04-05 06:59:50.918258 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:59:50.918264 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 06:59:50.918270 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 06:59:50.918277 | orchestrator | 2026-04-05 
06:59:50.918283 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-05 06:59:50.918289 | orchestrator | Sunday 05 April 2026 06:59:40 +0000 (0:00:02.026) 0:04:27.885 ********** 2026-04-05 06:59:50.918295 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:50.918302 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:59:50.918308 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:50.918314 | orchestrator | 2026-04-05 06:59:50.918320 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-05 06:59:50.918326 | orchestrator | Sunday 05 April 2026 06:59:41 +0000 (0:00:01.401) 0:04:29.286 ********** 2026-04-05 06:59:50.918332 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:50.918338 | orchestrator | 2026-04-05 06:59:50.918344 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-05 06:59:50.918351 | orchestrator | Sunday 05 April 2026 06:59:42 +0000 (0:00:01.152) 0:04:30.438 ********** 2026-04-05 06:59:50.918357 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:50.918363 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:59:50.918369 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:50.918375 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:59:50.918381 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:59:50.918387 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:59:50.918393 | orchestrator | 2026-04-05 06:59:50.918400 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-05 06:59:50.918406 | orchestrator | Sunday 05 April 2026 06:59:44 +0000 (0:00:01.812) 0:04:32.251 ********** 2026-04-05 06:59:50.918412 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 06:59:50.918418 | orchestrator | 2026-04-05 06:59:50.918424 | orchestrator | TASK [nova-cell : Set vendordata 
file path] ************************************ 2026-04-05 06:59:50.918431 | orchestrator | Sunday 05 April 2026 06:59:46 +0000 (0:00:01.795) 0:04:34.046 ********** 2026-04-05 06:59:50.918437 | orchestrator | skipping: [testbed-node-3] 2026-04-05 06:59:50.918443 | orchestrator | skipping: [testbed-node-4] 2026-04-05 06:59:50.918449 | orchestrator | skipping: [testbed-node-5] 2026-04-05 06:59:50.918455 | orchestrator | skipping: [testbed-node-0] 2026-04-05 06:59:50.918461 | orchestrator | skipping: [testbed-node-1] 2026-04-05 06:59:50.918467 | orchestrator | skipping: [testbed-node-2] 2026-04-05 06:59:50.918473 | orchestrator | 2026-04-05 06:59:50.918480 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-05 06:59:50.918486 | orchestrator | Sunday 05 April 2026 06:59:48 +0000 (0:00:02.072) 0:04:36.119 ********** 2026-04-05 06:59:50.918495 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918534 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918542 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918557 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918565 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918579 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:59:50.918591 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293544 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293652 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293664 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293710 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293746 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 06:59:54.293755 | orchestrator | 2026-04-05 06:59:54.293765 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-05 06:59:54.293775 | orchestrator | Sunday 05 April 2026 06:59:53 +0000 (0:00:04.740) 0:04:40.859 ********** 2026-04-05 06:59:54.293785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:59:54.293795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:59:54.293812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 06:59:54.293826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 06:59:54.293841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 07:00:06.679824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 07:00:06.679945 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680024 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680053 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680065 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680094 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680107 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680119 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680172 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680184 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:00:06.680196 | orchestrator | 2026-04-05 07:00:06.680214 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-05 07:00:06.680226 | orchestrator | Sunday 05 April 2026 07:00:01 +0000 (0:00:08.101) 0:04:48.961 ********** 2026-04-05 07:00:06.680238 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:00:06.680250 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:00:06.680261 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
07:00:06.680272 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:06.680283 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:06.680294 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:06.680305 | orchestrator | 2026-04-05 07:00:06.680316 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-05 07:00:06.680327 | orchestrator | Sunday 05 April 2026 07:00:04 +0000 (0:00:03.017) 0:04:51.978 ********** 2026-04-05 07:00:06.680338 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 07:00:06.680350 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 07:00:06.680361 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 07:00:06.680372 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-05 07:00:06.680386 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 07:00:06.680401 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 07:00:06.680415 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:06.680429 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-05 07:00:06.680442 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 07:00:06.680456 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:06.680477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-05 07:00:36.076409 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.076527 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 
07:00:36.076544 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 07:00:36.076556 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-05 07:00:36.076567 | orchestrator | 2026-04-05 07:00:36.076603 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-05 07:00:36.076615 | orchestrator | Sunday 05 April 2026 07:00:08 +0000 (0:00:04.560) 0:04:56.538 ********** 2026-04-05 07:00:36.076625 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:00:36.076636 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:00:36.076647 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:00:36.076657 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.076668 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.076679 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.076689 | orchestrator | 2026-04-05 07:00:36.076701 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-05 07:00:36.076711 | orchestrator | Sunday 05 April 2026 07:00:10 +0000 (0:00:01.812) 0:04:58.351 ********** 2026-04-05 07:00:36.076722 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 07:00:36.076734 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 07:00:36.076745 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-05 07:00:36.076755 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 07:00:36.076768 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2026-04-05 07:00:36.076779 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-05 07:00:36.076789 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076810 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076821 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076831 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.076842 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076853 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.076869 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-05 07:00:36.076888 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.076907 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 07:00:36.076943 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 07:00:36.076963 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-05 07:00:36.076982 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 07:00:36.077000 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 
'service': 'nova-libvirt'}) 2026-04-05 07:00:36.077020 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-05 07:00:36.077034 | orchestrator | 2026-04-05 07:00:36.077047 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-05 07:00:36.077061 | orchestrator | Sunday 05 April 2026 07:00:16 +0000 (0:00:05.994) 0:05:04.346 ********** 2026-04-05 07:00:36.077075 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 07:00:36.077098 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 07:00:36.077112 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-05 07:00:36.077125 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 07:00:36.077137 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 07:00:36.077183 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 07:00:36.077197 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 07:00:36.077210 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-05 07:00:36.077224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-05 07:00:36.077254 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 07:00:36.077270 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 07:00:36.077282 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-05 07:00:36.077296 | orchestrator 
| ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 07:00:36.077309 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 07:00:36.077320 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 07:00:36.077331 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.077342 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 07:00:36.077352 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.077363 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-05 07:00:36.077374 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.077385 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-05 07:00:36.077395 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 07:00:36.077406 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 07:00:36.077416 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-05 07:00:36.077427 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 07:00:36.077438 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 07:00:36.077449 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-05 07:00:36.077460 | orchestrator | 2026-04-05 07:00:36.077470 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-05 07:00:36.077481 | orchestrator | Sunday 05 April 2026 07:00:25 +0000 (0:00:08.452) 0:05:12.798 ********** 2026-04-05 07:00:36.077492 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:00:36.077503 | 
orchestrator | skipping: [testbed-node-4] 2026-04-05 07:00:36.077513 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:00:36.077524 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.077535 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.077545 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.077556 | orchestrator | 2026-04-05 07:00:36.077567 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-05 07:00:36.077578 | orchestrator | Sunday 05 April 2026 07:00:26 +0000 (0:00:01.924) 0:05:14.722 ********** 2026-04-05 07:00:36.077589 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:00:36.077600 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:00:36.077611 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:00:36.077621 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.077632 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.077650 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.077661 | orchestrator | 2026-04-05 07:00:36.077672 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-05 07:00:36.077683 | orchestrator | Sunday 05 April 2026 07:00:29 +0000 (0:00:02.090) 0:05:16.813 ********** 2026-04-05 07:00:36.077693 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.077704 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:00:36.077716 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.077727 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.077738 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:00:36.077748 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:00:36.077759 | orchestrator | 2026-04-05 07:00:36.077776 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-05 07:00:36.077787 | orchestrator | Sunday 05 April 2026 07:00:32 +0000 (0:00:03.156) 
0:05:19.970 ********** 2026-04-05 07:00:36.077798 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:00:36.077808 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:00:36.077819 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:00:36.077830 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:00:36.077840 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:00:36.077851 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:00:36.077862 | orchestrator | 2026-04-05 07:00:36.077873 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-05 07:00:36.077884 | orchestrator | Sunday 05 April 2026 07:00:35 +0000 (0:00:03.138) 0:05:23.109 ********** 2026-04-05 07:00:36.077898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 07:00:36.077922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 07:00:37.270399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 07:00:37.270506 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:00:37.270555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 07:00:37.270584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 07:00:37.270597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-05 07:00:37.270609 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:00:37.270621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-05 07:00:37.270653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-05 07:00:37.270666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:37.270689 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:00:37.270702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:37.270718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:37.270730 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:00:37.270742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:37.270753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:37.270765 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:00:37.270783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:43.235262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:43.235444 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:00:43.235479 | orchestrator |
2026-04-05 07:00:43.235500 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-05 07:00:43.235522 | orchestrator | Sunday 05 April 2026 07:00:38 +0000 (0:00:03.022) 0:05:26.131 **********
2026-04-05 07:00:43.235544 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-05 07:00:43.235564 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235584 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:00:43.235597 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-05 07:00:43.235608 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235619 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:00:43.235630 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-05 07:00:43.235642 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235652 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:00:43.235663 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-05 07:00:43.235674 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235684 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:00:43.235728 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-05 07:00:43.235743 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235755 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:00:43.235769 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-05 07:00:43.235782 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-05 07:00:43.235794 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:00:43.235807 | orchestrator |
2026-04-05 07:00:43.235821 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-04-05 07:00:43.235833 | orchestrator | Sunday 05 April 2026 07:00:40 +0000 (0:00:01.996) 0:05:28.127 **********
2026-04-05 07:00:43.235849 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:43.235864 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:43.235931 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:43.235947 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:43.235989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:43.236004 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:43.236016 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:43.236039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:43.236063 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:49.042385 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042554 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042571 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042585 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042662 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.042675 | orchestrator |
2026-04-05 07:00:49.042687 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-04-05 07:00:49.042700 | orchestrator | Sunday 05 April 2026 07:00:45 +0000 (0:00:05.292) 0:05:33.420 **********
2026-04-05 07:00:49.042713 | orchestrator | ok: [testbed-node-3] => {
2026-04-05 07:00:49.042725 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042736 | orchestrator | }
2026-04-05 07:00:49.042747 | orchestrator | ok: [testbed-node-4] => {
2026-04-05 07:00:49.042757 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042768 | orchestrator | }
2026-04-05 07:00:49.042779 | orchestrator | ok: [testbed-node-5] => {
2026-04-05 07:00:49.042789 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042800 | orchestrator | }
2026-04-05 07:00:49.042811 | orchestrator | ok: [testbed-node-0] => {
2026-04-05 07:00:49.042821 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042832 | orchestrator | }
2026-04-05 07:00:49.042842 | orchestrator | ok: [testbed-node-1] => {
2026-04-05 07:00:49.042853 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042863 | orchestrator | }
2026-04-05 07:00:49.042874 | orchestrator | ok: [testbed-node-2] => {
2026-04-05 07:00:49.042887 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:00:49.042900 | orchestrator | }
2026-04-05 07:00:49.042913 | orchestrator |
2026-04-05 07:00:49.042926 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 07:00:49.042939 | orchestrator | Sunday 05 April 2026 07:00:47 +0000 (0:00:01.939) 0:05:35.359 **********
2026-04-05 07:00:49.042958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:49.042983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:49.042999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:49.043013 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:00:49.043035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:53.189818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:53.189968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:53.189985 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:00:53.189999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:53.190101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:53.190114 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:00:53.190124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:53.190155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-05 07:00:53.190227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:53.190238 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:00:53.190255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-05 07:00:53.190274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-05 07:00:53.190284 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:00:53.190294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-05 07:00:53.190304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:00:53.190315 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:00:53.190327 | orchestrator |
2026-04-05 07:00:53.190341 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:00:53.190354 | orchestrator | Sunday 05 April 2026 07:00:51 +0000 (0:00:04.143) 0:05:39.503 **********
2026-04-05 07:00:53.190365 | orchestrator |
2026-04-05 07:00:53.190377 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:00:53.190388 | orchestrator | Sunday 05 April 2026 07:00:52 +0000 (0:00:00.502) 0:05:40.005 **********
2026-04-05 07:00:53.190400 | orchestrator |
2026-04-05 07:00:53.190411 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:00:53.190422 | orchestrator | Sunday 05 April 2026 07:00:53 +0000 (0:00:00.768) 0:05:40.774 **********
2026-04-05 07:00:53.190434 | orchestrator |
2026-04-05 07:00:53.190453 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:02:32.323781 | orchestrator | Sunday 05 April 2026 07:00:53 +0000 (0:00:00.537) 0:05:41.312 **********
2026-04-05 07:02:32.323937 | orchestrator |
2026-04-05 07:02:32.323955 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:02:32.323967 | orchestrator | Sunday 05 April 2026 07:00:54 +0000 (0:00:00.515) 0:05:41.827 **********
2026-04-05 07:02:32.323978 | orchestrator |
2026-04-05 07:02:32.323989 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-05 07:02:32.324000 | orchestrator | Sunday 05 April 2026 07:00:54 +0000 (0:00:00.527) 0:05:42.354 **********
2026-04-05 07:02:32.324011 | orchestrator |
2026-04-05 07:02:32.324022 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-05 07:02:32.324062 | orchestrator |
2026-04-05 07:02:32.324074 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-05 07:02:32.324085 | orchestrator | Sunday 05 April 2026 07:00:56 +0000 (0:00:02.319) 0:05:44.673 **********
2026-04-05 07:02:32.324096 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:32.324108 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:32.324119 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:32.324129 | orchestrator |
2026-04-05 07:02:32.324140 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-05 07:02:32.324151 | orchestrator |
2026-04-05 07:02:32.324161 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-05 07:02:32.324172 | orchestrator | Sunday 05 April 2026 07:00:58 +0000 (0:00:01.697) 0:05:46.372 **********
2026-04-05 07:02:32.324182 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:32.324193 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:32.324204 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:32.324215 | orchestrator |
2026-04-05 07:02:32.324310 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-05 07:02:32.324327 | orchestrator |
2026-04-05 07:02:32.324341 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-05 07:02:32.324354 | orchestrator | Sunday 05 April 2026 07:01:01 +0000 (0:00:02.777) 0:05:49.150 **********
2026-04-05 07:02:32.324368 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-05 07:02:32.324382 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-05 07:02:32.324394 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-05 07:02:32.324408 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor)
2026-04-05 07:02:32.324421 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-05 07:02:32.324434 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-05 07:02:32.324447 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor)
2026-04-05 07:02:32.324460 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324473 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-05 07:02:32.324486 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324498 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-05 07:02:32.324512 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324525 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-05 07:02:32.324538 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324551 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-05 07:02:32.324563 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-05 07:02:32.324575 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324589 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-05 07:02:32.324601 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324613 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor)
2026-04-05 07:02:32.324626 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-05 07:02:32.324639 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324652 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-05 07:02:32.324662 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-05 07:02:32.324673 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-05 07:02:32.324684 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-05 07:02:32.324694 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324705 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy)
2026-04-05 07:02:32.324716 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-05 07:02:32.324736 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy)
2026-04-05 07:02:32.324748 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324758 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-05 07:02:32.324769 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324779 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy)
2026-04-05 07:02:32.324790 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-05 07:02:32.324800 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-05 07:02:32.324811 | orchestrator |
2026-04-05 07:02:32.324822 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-05 07:02:32.324832 | orchestrator |
2026-04-05 07:02:32.324843 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-05 07:02:32.324854 | orchestrator | Sunday 05 April 2026 07:01:40 +0000 (0:00:38.752) 0:06:27.903 **********
2026-04-05 07:02:32.324865 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2026-04-05 07:02:32.324895 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2026-04-05 07:02:32.324907 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2026-04-05 07:02:32.324918 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2026-04-05 07:02:32.324928 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2026-04-05 07:02:32.324939 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2026-04-05 07:02:32.324949 | orchestrator |
2026-04-05 07:02:32.324960 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-05 07:02:32.324971 | orchestrator |
2026-04-05 07:02:32.324981 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-05 07:02:32.324992 | orchestrator | Sunday 05 April 2026 07:02:00 +0000 (0:00:20.301) 0:06:48.204 **********
2026-04-05 07:02:32.325003 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:02:32.325014 | orchestrator |
2026-04-05 07:02:32.325024 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-05 07:02:32.325035 | orchestrator |
2026-04-05 07:02:32.325046 |
orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-05 07:02:32.325056 | orchestrator | Sunday 05 April 2026 07:02:17 +0000 (0:00:16.908) 0:07:05.112 ********** 2026-04-05 07:02:32.325067 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:02:32.325077 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:02:32.325088 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:02:32.325099 | orchestrator | 2026-04-05 07:02:32.325109 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:02:32.325120 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 07:02:32.325140 | orchestrator | testbed-node-0 : ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-05 07:02:32.325151 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-05 07:02:32.325162 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-05 07:02:32.325172 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-05 07:02:32.325183 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-05 07:02:32.325193 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-05 07:02:32.325212 | orchestrator | 2026-04-05 07:02:32.325223 | orchestrator | 2026-04-05 07:02:32.325254 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:02:32.325265 | orchestrator | Sunday 05 April 2026 07:02:31 +0000 (0:00:14.526) 0:07:19.639 ********** 2026-04-05 07:02:32.325275 | orchestrator | =============================================================================== 2026-04-05 
07:02:32.325286 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 38.75s 2026-04-05 07:02:32.325297 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 20.30s 2026-04-05 07:02:32.325307 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.97s 2026-04-05 07:02:32.325318 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 18.18s 2026-04-05 07:02:32.325328 | orchestrator | nova : Run Nova API online database migrations ------------------------- 16.91s 2026-04-05 07:02:32.325339 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.53s 2026-04-05 07:02:32.325350 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.56s 2026-04-05 07:02:32.325360 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.95s 2026-04-05 07:02:32.325371 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.45s 2026-04-05 07:02:32.325381 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.10s 2026-04-05 07:02:32.325392 | orchestrator | nova : Copying over config.json files for services ---------------------- 6.23s 2026-04-05 07:02:32.325402 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 5.99s 2026-04-05 07:02:32.325413 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 5.29s 2026-04-05 07:02:32.325423 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 5.17s 2026-04-05 07:02:32.325434 | orchestrator | service-check-containers : nova | Check containers ---------------------- 5.08s 2026-04-05 07:02:32.325444 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 4.96s 2026-04-05 07:02:32.325455 
| orchestrator | nova-cell : Get container facts ----------------------------------------- 4.89s 2026-04-05 07:02:32.325465 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.84s 2026-04-05 07:02:32.325476 | orchestrator | nova-cell : Get current Libvirt version --------------------------------- 4.83s 2026-04-05 07:02:32.325487 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.74s 2026-04-05 07:02:32.513200 | orchestrator | + osism apply -a upgrade horizon 2026-04-05 07:02:33.927036 | orchestrator | 2026-04-05 07:02:33 | INFO  | Prepare task for execution of horizon. 2026-04-05 07:02:33.990418 | orchestrator | 2026-04-05 07:02:33 | INFO  | Task fe8caa55-7601-4143-b149-b3e7e4e10e1d (horizon) was prepared for execution. 2026-04-05 07:02:33.990533 | orchestrator | 2026-04-05 07:02:33 | INFO  | It takes a moment until task fe8caa55-7601-4143-b149-b3e7e4e10e1d (horizon) has been started and output is visible here. 
2026-04-05 07:02:43.032376 | orchestrator |
2026-04-05 07:02:43.032489 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:02:43.032502 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-05 07:02:43.032511 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-05 07:02:43.032527 | orchestrator |
2026-04-05 07:02:43.032535 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:02:43.032543 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-05 07:02:43.032550 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-05 07:02:43.032615 | orchestrator | Sunday 05 April 2026 07:02:38 +0000 (0:00:01.127) 0:00:01.127 **********
2026-04-05 07:02:43.032624 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:43.032634 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:43.032642 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:43.032649 | orchestrator |
2026-04-05 07:02:43.032668 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:02:43.032676 | orchestrator | Sunday 05 April 2026 07:02:39 +0000 (0:00:00.989) 0:00:02.116 **********
2026-04-05 07:02:43.032683 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-05 07:02:43.032691 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-05 07:02:43.032698 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-05 07:02:43.032705 | orchestrator |
2026-04-05 07:02:43.032712 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-05 07:02:43.032720 | orchestrator |
2026-04-05 07:02:43.032727 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 07:02:43.032734 | orchestrator | Sunday 05 April 2026 07:02:40 +0000 (0:00:00.766) 0:00:02.883 **********
2026-04-05 07:02:43.032741 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:02:43.032750 | orchestrator |
2026-04-05 07:02:43.032757 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-05 07:02:43.032764 | orchestrator | Sunday 05 April 2026 07:02:41 +0000 (0:00:01.265) 0:00:04.148 **********
2026-04-05 07:02:43.032778 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 07:02:43.032810 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 07:02:43.032832 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-05 07:02:49.614074 | orchestrator |
2026-04-05 07:02:49.614183 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-05 07:02:49.614200 | orchestrator | Sunday 05 April 2026 07:02:43 +0000 (0:00:01.701) 0:00:05.850 **********
2026-04-05 07:02:49.614213 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.614225 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.614236 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.614322 | orchestrator |
2026-04-05 07:02:49.614335 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 07:02:49.614346 | orchestrator | Sunday 05 April 2026 07:02:43 +0000 (0:00:00.304) 0:00:06.155 **********
2026-04-05 07:02:49.614357 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 07:02:49.614369 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 07:02:49.614381 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 07:02:49.614406 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 07:02:49.614418 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 07:02:49.614428 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 07:02:49.614439 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-05 07:02:49.614450 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 07:02:49.614461 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 07:02:49.614471 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 07:02:49.614482 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 07:02:49.614493 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 07:02:49.614504 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 07:02:49.614514 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 07:02:49.614525 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-05 07:02:49.614538 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 07:02:49.614550 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-05 07:02:49.614563 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-05 07:02:49.614575 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-05 07:02:49.614588 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-05 07:02:49.614600 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-05 07:02:49.614614 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-05 07:02:49.614627 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-05 07:02:49.614639 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-05 07:02:49.614651 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-05 07:02:49.614664 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-05 07:02:49.614675 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-05 07:02:49.614708 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-05 07:02:49.614719 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-05 07:02:49.614730 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-05 07:02:49.614741 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-05 07:02:49.614752 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-05 07:02:49.614763 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-05 07:02:49.614794 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-05 07:02:49.614805 | orchestrator |
2026-04-05 07:02:49.614816 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.614827 | orchestrator | Sunday 05 April 2026 07:02:44 +0000 (0:00:01.382) 0:00:07.538 **********
2026-04-05 07:02:49.614838 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.614849 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.614859 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.614870 | orchestrator |
2026-04-05 07:02:49.614881 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.614892 | orchestrator | Sunday 05 April 2026 07:02:45 +0000 (0:00:00.120) 0:00:07.879 **********
2026-04-05 07:02:49.614903 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.614915 | orchestrator |
2026-04-05 07:02:49.614926 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:02:49.614937 | orchestrator | Sunday 05 April 2026 07:02:45 +0000 (0:00:00.120) 0:00:07.999 **********
2026-04-05 07:02:49.614947 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.614964 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:49.614974 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:49.614985 | orchestrator |
2026-04-05 07:02:49.614996 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.615007 | orchestrator | Sunday 05 April 2026 07:02:45 +0000 (0:00:00.299) 0:00:08.298 **********
2026-04-05 07:02:49.615021 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.615039 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.615056 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.615067 | orchestrator |
2026-04-05 07:02:49.615078 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.615089 | orchestrator | Sunday 05 April 2026 07:02:46 +0000 (0:00:00.510) 0:00:08.809 **********
2026-04-05 07:02:49.615100 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615111 | orchestrator |
2026-04-05 07:02:49.615121 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:02:49.615132 | orchestrator | Sunday 05 April 2026 07:02:46 +0000 (0:00:00.143) 0:00:08.952 **********
2026-04-05 07:02:49.615143 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615154 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:49.615164 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:49.615175 | orchestrator |
2026-04-05 07:02:49.615186 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.615197 | orchestrator | Sunday 05 April 2026 07:02:46 +0000 (0:00:00.333) 0:00:09.286 **********
2026-04-05 07:02:49.615217 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.615228 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.615260 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.615271 | orchestrator |
2026-04-05 07:02:49.615282 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.615293 | orchestrator | Sunday 05 April 2026 07:02:46 +0000 (0:00:00.329) 0:00:09.616 **********
2026-04-05 07:02:49.615304 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615315 | orchestrator |
2026-04-05 07:02:49.615325 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:02:49.615336 | orchestrator | Sunday 05 April 2026 07:02:47 +0000 (0:00:00.124) 0:00:09.740 **********
2026-04-05 07:02:49.615347 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615358 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:49.615368 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:49.615379 | orchestrator |
2026-04-05 07:02:49.615390 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.615401 | orchestrator | Sunday 05 April 2026 07:02:47 +0000 (0:00:00.527) 0:00:10.267 **********
2026-04-05 07:02:49.615412 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.615423 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.615433 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.615444 | orchestrator |
2026-04-05 07:02:49.615455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.615465 | orchestrator | Sunday 05 April 2026 07:02:47 +0000 (0:00:00.324) 0:00:10.592 **********
2026-04-05 07:02:49.615476 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615487 | orchestrator |
2026-04-05 07:02:49.615498 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:02:49.615508 | orchestrator | Sunday 05 April 2026 07:02:48 +0000 (0:00:00.123) 0:00:10.716 **********
2026-04-05 07:02:49.615519 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615530 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:49.615540 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:49.615551 | orchestrator |
2026-04-05 07:02:49.615562 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.615572 | orchestrator | Sunday 05 April 2026 07:02:48 +0000 (0:00:00.309) 0:00:11.025 **********
2026-04-05 07:02:49.615583 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.615594 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.615605 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.615616 | orchestrator |
2026-04-05 07:02:49.615626 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.615637 | orchestrator | Sunday 05 April 2026 07:02:48 +0000 (0:00:00.517) 0:00:11.542 **********
2026-04-05 07:02:49.615648 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615659 | orchestrator |
2026-04-05 07:02:49.615669 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:02:49.615680 | orchestrator | Sunday 05 April 2026 07:02:48 +0000 (0:00:00.135) 0:00:11.678 **********
2026-04-05 07:02:49.615691 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:02:49.615702 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:02:49.615712 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:02:49.615723 | orchestrator |
2026-04-05 07:02:49.615734 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:02:49.615744 | orchestrator | Sunday 05 April 2026 07:02:49 +0000 (0:00:00.311) 0:00:11.989 **********
2026-04-05 07:02:49.615755 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:02:49.615766 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:02:49.615777 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:02:49.615787 | orchestrator |
2026-04-05 07:02:49.615798 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:02:49.615816 | orchestrator | Sunday 05 April 2026 07:02:49 +0000 (0:00:00.337) 0:00:12.327 **********
2026-04-05 07:03:04.545002 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545110 | orchestrator |
2026-04-05 07:03:04.545125 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:03:04.545136 | orchestrator | Sunday 05 April 2026 07:02:49 +0000 (0:00:00.150) 0:00:12.478 **********
2026-04-05 07:03:04.545146 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545156 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:03:04.545166 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:03:04.545176 | orchestrator |
2026-04-05 07:03:04.545186 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:03:04.545196 | orchestrator | Sunday 05 April 2026 07:02:50 +0000 (0:00:00.513) 0:00:12.992 **********
2026-04-05 07:03:04.545205 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:03:04.545216 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:03:04.545226 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:03:04.545236 | orchestrator |
2026-04-05 07:03:04.545317 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:03:04.545329 | orchestrator | Sunday 05 April 2026 07:02:50 +0000 (0:00:00.351) 0:00:13.343 **********
2026-04-05 07:03:04.545339 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545349 | orchestrator |
2026-04-05 07:03:04.545359 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:03:04.545368 | orchestrator | Sunday 05 April 2026 07:02:50 +0000 (0:00:00.179) 0:00:13.522 **********
2026-04-05 07:03:04.545378 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545388 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:03:04.545397 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:03:04.545407 | orchestrator |
2026-04-05 07:03:04.545416 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:03:04.545426 | orchestrator | Sunday 05 April 2026 07:02:51 +0000 (0:00:00.306) 0:00:13.829 **********
2026-04-05 07:03:04.545436 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:03:04.545445 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:03:04.545455 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:03:04.545465 | orchestrator |
2026-04-05 07:03:04.545474 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:03:04.545484 | orchestrator | Sunday 05 April 2026 07:02:51 +0000 (0:00:00.532) 0:00:14.362 **********
2026-04-05 07:03:04.545494 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545503 | orchestrator |
2026-04-05 07:03:04.545513 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:03:04.545523 | orchestrator | Sunday 05 April 2026 07:02:51 +0000 (0:00:00.138) 0:00:14.500 **********
2026-04-05 07:03:04.545533 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545543 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:03:04.545552 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:03:04.545562 | orchestrator |
2026-04-05 07:03:04.545571 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:03:04.545581 | orchestrator | Sunday 05 April 2026 07:02:52 +0000 (0:00:00.309) 0:00:14.809 **********
2026-04-05 07:03:04.545591 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:03:04.545600 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:03:04.545610 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:03:04.545620 | orchestrator |
2026-04-05 07:03:04.545629 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:03:04.545639 | orchestrator | Sunday 05 April 2026 07:02:52 +0000 (0:00:00.339) 0:00:15.149 **********
2026-04-05 07:03:04.545648 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545658 | orchestrator |
2026-04-05 07:03:04.545668 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:03:04.545677 | orchestrator | Sunday 05 April 2026 07:02:52 +0000 (0:00:00.156) 0:00:15.306 **********
2026-04-05 07:03:04.545687 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545696 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:03:04.545706 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:03:04.545738 | orchestrator |
2026-04-05 07:03:04.545748 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-05 07:03:04.545758 | orchestrator | Sunday 05 April 2026 07:02:53 +0000 (0:00:00.523) 0:00:15.830 **********
2026-04-05 07:03:04.545767 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:03:04.545777 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:03:04.545786 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:03:04.545796 | orchestrator |
2026-04-05 07:03:04.545805 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-05 07:03:04.545815 | orchestrator | Sunday 05 April 2026 07:02:53 +0000 (0:00:00.336) 0:00:16.166 **********
2026-04-05 07:03:04.545824 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545834 | orchestrator |
2026-04-05 07:03:04.545844 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-05 07:03:04.545853 | orchestrator | Sunday 05 April 2026 07:02:53 +0000 (0:00:00.142) 0:00:16.309 **********
2026-04-05 07:03:04.545864 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:03:04.545874 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:03:04.545883 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:03:04.545893 | orchestrator |
2026-04-05 07:03:04.545903 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-05 07:03:04.545912 | orchestrator | Sunday 05 April 2026 07:02:53 +0000 (0:00:00.314) 0:00:16.624 **********
2026-04-05 07:03:04.545922 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:03:04.545931 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:03:04.545941 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:03:04.545950 | orchestrator |
2026-04-05 07:03:04.545960 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-05 07:03:04.545970 | orchestrator | Sunday 05 April 2026 07:02:55 +0000 (0:00:01.829) 0:00:18.454 **********
2026-04-05 07:03:04.545979 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 07:03:04.545990 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 07:03:04.546000 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-05 07:03:04.546009 | orchestrator |
2026-04-05 07:03:04.546082 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-05 07:03:04.546111 | orchestrator | Sunday 05 April 2026 07:02:57 +0000 (0:00:01.915) 0:00:20.369 **********
2026-04-05 07:03:04.546122 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-05
07:03:04.546134 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 07:03:04.546143 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-05 07:03:04.546153 | orchestrator | 2026-04-05 07:03:04.546162 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-05 07:03:04.546172 | orchestrator | Sunday 05 April 2026 07:02:59 +0000 (0:00:01.960) 0:00:22.330 ********** 2026-04-05 07:03:04.546182 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 07:03:04.546197 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 07:03:04.546207 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-05 07:03:04.546216 | orchestrator | 2026-04-05 07:03:04.546226 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-05 07:03:04.546235 | orchestrator | Sunday 05 April 2026 07:03:01 +0000 (0:00:01.528) 0:00:23.859 ********** 2026-04-05 07:03:04.546245 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:03:04.546271 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:03:04.546281 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:03:04.546291 | orchestrator | 2026-04-05 07:03:04.546301 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-05 07:03:04.546319 | orchestrator | Sunday 05 April 2026 07:03:01 +0000 (0:00:00.321) 0:00:24.180 ********** 2026-04-05 07:03:04.546328 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:03:04.546338 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:03:04.546347 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:03:04.546357 | 
orchestrator | 2026-04-05 07:03:04.546367 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 07:03:04.546376 | orchestrator | Sunday 05 April 2026 07:03:01 +0000 (0:00:00.522) 0:00:24.703 ********** 2026-04-05 07:03:04.546386 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:03:04.546396 | orchestrator | 2026-04-05 07:03:04.546406 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-05 07:03:04.546415 | orchestrator | Sunday 05 April 2026 07:03:02 +0000 (0:00:01.004) 0:00:25.708 ********** 2026-04-05 07:03:04.546431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:04.546462 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': 
'30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:05.357603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:05.357698 | orchestrator | 2026-04-05 07:03:05.357714 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-05 07:03:05.357725 | orchestrator | Sunday 05 April 2026 07:03:04 +0000 (0:00:01.743) 0:00:27.451 ********** 2026-04-05 07:03:05.357771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:05.357804 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:03:05.357822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:05.357841 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:03:05.357860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:08.317988 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:03:08.318149 | orchestrator | 2026-04-05 07:03:08.318167 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-05 07:03:08.318181 | orchestrator | Sunday 05 April 2026 07:03:05 +0000 (0:00:00.716) 0:00:28.168 ********** 2026-04-05 07:03:08.318215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:08.318305 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:03:08.318352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:08.318368 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:03:08.318386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:08.318414 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:03:08.318425 | orchestrator | 2026-04-05 07:03:08.318437 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-05 07:03:08.318448 | orchestrator | Sunday 05 April 2026 07:03:06 +0000 (0:00:01.281) 0:00:29.450 ********** 2026-04-05 07:03:08.318469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:09.451774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:09.451941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-05 07:03:09.451996 | orchestrator | 2026-04-05 07:03:09.452078 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-05 07:03:09.452097 | orchestrator | Sunday 05 April 2026 07:03:08 +0000 (0:00:01.828) 0:00:31.278 ********** 2026-04-05 07:03:09.452113 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:03:09.452128 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:03:09.452142 | orchestrator | } 2026-04-05 07:03:09.452156 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:03:09.452170 | orchestrator |  "msg": "Notifying handlers" 
2026-04-05 07:03:09.452184 | orchestrator | } 2026-04-05 07:03:09.452198 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:03:09.452211 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:03:09.452225 | orchestrator | } 2026-04-05 07:03:09.452241 | orchestrator | 2026-04-05 07:03:09.452279 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:03:09.452294 | orchestrator | Sunday 05 April 2026 07:03:08 +0000 (0:00:00.376) 0:00:31.655 ********** 2026-04-05 07:03:09.452310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:03:09.452328 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:03:09.452381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:04:15.534721 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:04:15.534806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-05 07:04:15.534833 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:04:15.534839 | orchestrator | 2026-04-05 07:04:15.534844 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-05 07:04:15.534850 | orchestrator | Sunday 05 April 2026 07:03:10 +0000 (0:00:01.470) 0:00:33.126 ********** 2026-04-05 07:04:15.534855 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:04:15.534860 | orchestrator | skipping: [testbed-node-1] 
2026-04-05 07:04:15.534864 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:04:15.534869 | orchestrator |
2026-04-05 07:04:15.534874 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-05 07:04:15.534878 | orchestrator | Sunday 05 April 2026 07:03:10 +0000 (0:00:00.343) 0:00:33.469 **********
2026-04-05 07:04:15.534883 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:04:15.534888 | orchestrator |
2026-04-05 07:04:15.534893 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-05 07:04:15.534908 | orchestrator | Sunday 05 April 2026 07:03:11 +0000 (0:00:00.929) 0:00:34.399 **********
2026-04-05 07:04:15.534913 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:04:15.534917 | orchestrator |
2026-04-05 07:04:15.534922 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 07:04:15.534926 | orchestrator | Sunday 05 April 2026 07:03:45 +0000 (0:00:33.431) 0:01:07.831 **********
2026-04-05 07:04:15.534931 | orchestrator |
2026-04-05 07:04:15.534935 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 07:04:15.534940 | orchestrator | Sunday 05 April 2026 07:03:45 +0000 (0:00:00.296) 0:01:08.127 **********
2026-04-05 07:04:15.534945 | orchestrator |
2026-04-05 07:04:15.534949 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-05 07:04:15.534954 | orchestrator | Sunday 05 April 2026 07:03:45 +0000 (0:00:00.077) 0:01:08.204 **********
2026-04-05 07:04:15.534958 | orchestrator |
2026-04-05 07:04:15.534963 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-05 07:04:15.534968 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-05 07:04:15.534972 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-05 07:04:15.534981 | orchestrator | Sunday 05 April 2026 07:03:45 +0000 (0:00:00.074) 0:01:08.279 **********
2026-04-05 07:04:15.534986 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:04:15.534990 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:04:15.534995 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:04:15.535000 | orchestrator |
2026-04-05 07:04:15.535014 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:04:15.535020 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-04-05 07:04:15.535025 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:04:15.535030 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:04:15.535034 | orchestrator |
2026-04-05 07:04:15.535039 | orchestrator |
2026-04-05 07:04:15.535043 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:04:15.535052 | orchestrator | Sunday 05 April 2026 07:04:15 +0000 (0:00:29.572) 0:01:37.852 **********
2026-04-05 07:04:15.535057 | orchestrator | ===============================================================================
2026-04-05 07:04:15.535062 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 33.43s
2026-04-05 07:04:15.535066 | orchestrator | horizon : Restart horizon container ------------------------------------ 29.57s
2026-04-05 07:04:15.535071 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.96s
2026-04-05 07:04:15.535075 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s
2026-04-05 07:04:15.535080 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.83s
2026-04-05 07:04:15.535084 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.83s
2026-04-05 07:04:15.535089 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.74s
2026-04-05 07:04:15.535093 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.70s
2026-04-05 07:04:15.535098 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.53s
2026-04-05 07:04:15.535103 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.47s
2026-04-05 07:04:15.535107 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.38s
2026-04-05 07:04:15.535112 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.28s
2026-04-05 07:04:15.535116 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.27s
2026-04-05 07:04:15.535121 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.00s
2026-04-05 07:04:15.535125 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s
2026-04-05 07:04:15.535130 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s
2026-04-05 07:04:15.535134 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s
2026-04-05 07:04:15.535139 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s
2026-04-05 07:04:15.535143 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s
2026-04-05 07:04:15.535148 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s
2026-04-05 07:04:15.720148 | orchestrator | + osism apply -a upgrade skyline
2026-04-05 07:04:17.012506 | orchestrator | 2026-04-05 07:04:17 | INFO  | Prepare task for execution of skyline.
2026-04-05 07:04:17.077056 | orchestrator | 2026-04-05 07:04:17 | INFO  | Task a4b083c1-f8e9-4132-af7e-ad08613801cf (skyline) was prepared for execution.
2026-04-05 07:04:17.077152 | orchestrator | 2026-04-05 07:04:17 | INFO  | It takes a moment until task a4b083c1-f8e9-4132-af7e-ad08613801cf (skyline) has been started and output is visible here.
2026-04-05 07:04:27.225624 | orchestrator |
2026-04-05 07:04:27.225816 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:04:27.225856 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-05 07:04:27.225870 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-05 07:04:27.225892 | orchestrator |
2026-04-05 07:04:27.225903 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:04:27.225914 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-05 07:04:27.225925 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-05 07:04:27.225947 | orchestrator | Sunday 05 April 2026 07:04:22 +0000 (0:00:01.601) 0:00:01.601 **********
2026-04-05 07:04:27.225958 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:04:27.225994 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:04:27.226005 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:04:27.226073 | orchestrator |
2026-04-05 07:04:27.226086 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:04:27.226097 | orchestrator | Sunday 05 April 2026 07:04:22 +0000 (0:00:00.655) 0:00:02.256 **********
2026-04-05 07:04:27.226108 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-04-05 07:04:27.226119 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-05 07:04:27.226129 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-05 07:04:27.226140 | orchestrator | 2026-04-05 07:04:27.226151 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-05 07:04:27.226161 | orchestrator | 2026-04-05 07:04:27.226174 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-05 07:04:27.226187 | orchestrator | Sunday 05 April 2026 07:04:23 +0000 (0:00:00.714) 0:00:02.971 ********** 2026-04-05 07:04:27.226201 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:04:27.226214 | orchestrator | 2026-04-05 07:04:27.226227 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-05 07:04:27.226239 | orchestrator | Sunday 05 April 2026 07:04:24 +0000 (0:00:01.283) 0:00:04.255 ********** 2026-04-05 07:04:27.226259 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:27.226279 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:27.226364 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:27.226391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:27.226407 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:27.226422 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:27.226436 | orchestrator | 2026-04-05 07:04:27.226450 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-05 07:04:27.226463 | orchestrator | Sunday 05 April 2026 07:04:26 +0000 (0:00:02.014) 0:00:06.269 ********** 2026-04-05 07:04:27.226482 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 
07:04:31.510012 | orchestrator | 2026-04-05 07:04:31.510189 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-05 07:04:31.510233 | orchestrator | Sunday 05 April 2026 07:04:27 +0000 (0:00:01.168) 0:00:07.437 ********** 2026-04-05 07:04:31.510258 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:31.510284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:31.510382 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:31.510436 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:31.510484 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:31.510504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:31.510523 | orchestrator | 2026-04-05 07:04:31.510540 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-05 07:04:31.510558 | orchestrator | Sunday 05 April 2026 07:04:31 +0000 (0:00:03.294) 0:00:10.732 ********** 2026-04-05 07:04:31.510577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:31.510613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:32.278548 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:04:32.278685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:32.278705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:32.278718 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:04:32.278729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:32.278807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:32.278820 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:04:32.278831 | orchestrator | 2026-04-05 07:04:32.278843 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-05 07:04:32.278854 | 
orchestrator | Sunday 05 April 2026 07:04:31 +0000 (0:00:00.678) 0:00:11.410 ********** 2026-04-05 07:04:32.278865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:32.278876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:32.278887 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:04:32.278897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:32.278929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:35.392895 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:04:35.393029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:35.393050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:35.393096 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:04:35.393109 | orchestrator | 2026-04-05 07:04:35.393121 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-05 07:04:35.393134 | orchestrator | Sunday 05 April 2026 07:04:32 +0000 (0:00:01.045) 0:00:12.456 ********** 2026-04-05 07:04:35.393165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:35.393202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:35.393217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:35.393230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:35.393256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:35.393277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:43.655965 | orchestrator | 2026-04-05 07:04:43.656083 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-05 07:04:43.656100 | orchestrator | Sunday 05 April 2026 07:04:35 +0000 (0:00:02.619) 0:00:15.075 ********** 2026-04-05 07:04:43.656112 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 07:04:43.656124 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 07:04:43.656134 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-05 07:04:43.656145 | orchestrator | 2026-04-05 07:04:43.656156 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-05 07:04:43.656167 | orchestrator | Sunday 05 April 2026 07:04:37 +0000 (0:00:01.580) 0:00:16.656 ********** 2026-04-05 07:04:43.656178 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 07:04:43.656189 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 07:04:43.656201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-05 07:04:43.656211 | orchestrator | 2026-04-05 07:04:43.656222 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-05 07:04:43.656260 | orchestrator | Sunday 05 April 2026 07:04:39 +0000 (0:00:01.943) 0:00:18.600 ********** 2026-04-05 07:04:43.656276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:43.656349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:43.656389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:43.656402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:43.656424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:43.656442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:43.656455 | orchestrator | 2026-04-05 07:04:43.656468 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-05 07:04:43.656482 | orchestrator | Sunday 05 April 2026 07:04:41 +0000 
(0:00:02.625) 0:00:21.226 ********** 2026-04-05 07:04:43.656495 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:04:43.656510 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:04:43.656523 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:04:43.656536 | orchestrator | 2026-04-05 07:04:43.656549 | orchestrator | TASK [service-check-containers : skyline | Check containers] ******************* 2026-04-05 07:04:43.656562 | orchestrator | Sunday 05 April 2026 07:04:42 +0000 (0:00:00.702) 0:00:21.928 ********** 2026-04-05 07:04:43.656585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:45.776993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:45.777098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-05 07:04:45.777132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:45.777165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:45.777200 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-05 07:04:45.777213 | orchestrator | 2026-04-05 07:04:45.777227 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] *** 2026-04-05 07:04:45.777239 | orchestrator | Sunday 05 April 2026 07:04:44 +0000 (0:00:02.367) 0:00:24.296 ********** 2026-04-05 07:04:45.777251 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:04:45.777264 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:04:45.777275 | orchestrator | } 2026-04-05 07:04:45.777286 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:04:45.777297 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:04:45.777360 | orchestrator | } 2026-04-05 07:04:45.777372 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:04:45.777382 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:04:45.777393 | orchestrator | } 2026-04-05 07:04:45.777404 | orchestrator | 2026-04-05 07:04:45.777415 | orchestrator | TASK 
[service-check-containers : Include tasks] ******************************** 2026-04-05 07:04:45.777426 | orchestrator | Sunday 05 April 2026 07:04:45 +0000 (0:00:00.554) 0:00:24.850 ********** 2026-04-05 07:04:45.777444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:04:45.777458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:04:45.777478 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:04:45.777501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:05:17.562443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:05:17.562528 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:05:17.562549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-05 07:05:17.562555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-05 07:05:17.562575 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:05:17.562581 | orchestrator | 2026-04-05 07:05:17.562586 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-05 07:05:17.562593 | orchestrator | Sunday 05 April 2026 07:04:46 +0000 (0:00:01.284) 0:00:26.135 ********** 2026-04-05 07:05:17.562598 | orchestrator | 2026-04-05 07:05:17.562603 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-05 07:05:17.562608 | orchestrator | Sunday 05 April 2026 07:04:46 +0000 (0:00:00.082) 0:00:26.217 ********** 2026-04-05 07:05:17.562613 | orchestrator | 2026-04-05 07:05:17.562618 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-04-05 07:05:17.562622 | orchestrator | Sunday 05 April 2026 07:04:46 +0000 (0:00:00.083) 0:00:26.301 ********** 2026-04-05 07:05:17.562627 | orchestrator | 2026-04-05 07:05:17.562632 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-04-05 07:05:17.562637 | 
orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-05 07:05:17.562642 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-05 07:05:17.562662 | orchestrator | Sunday 05 April 2026 07:04:46 +0000 (0:00:00.073) 0:00:26.375 **********
2026-04-05 07:05:17.562667 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:05:17.562672 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:05:17.562677 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:05:17.562682 | orchestrator |
2026-04-05 07:05:17.562687 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-05 07:05:17.562691 | orchestrator | Sunday 05 April 2026 07:05:00 +0000 (0:00:13.586) 0:00:39.961 **********
2026-04-05 07:05:17.562696 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:05:17.562701 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:05:17.562706 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:05:17.562711 | orchestrator |
2026-04-05 07:05:17.562715 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:05:17.562721 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 07:05:17.562727 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 07:05:17.562732 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-05 07:05:17.562737 | orchestrator |
2026-04-05 07:05:17.562742 | orchestrator |
2026-04-05 07:05:17.562747 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:05:17.562752 | orchestrator | Sunday 05 April 2026 07:05:17 +0000 (0:00:16.835) 0:00:56.797 **********
2026-04-05 07:05:17.562757 | orchestrator | ===============================================================================
2026-04-05 07:05:17.562761 | orchestrator | skyline : Restart skyline-console container ---------------------------- 16.84s
2026-04-05 07:05:17.562774 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 13.59s
2026-04-05 07:05:17.562779 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 3.29s
2026-04-05 07:05:17.562784 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.63s
2026-04-05 07:05:17.562789 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.62s
2026-04-05 07:05:17.562794 | orchestrator | service-check-containers : skyline | Check containers ------------------- 2.37s
2026-04-05 07:05:17.562798 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 2.01s
2026-04-05 07:05:17.562803 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 1.94s
2026-04-05 07:05:17.562808 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.58s
2026-04-05 07:05:17.562813 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.28s
2026-04-05 07:05:17.562817 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.28s
2026-04-05 07:05:17.562822 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.17s
2026-04-05 07:05:17.562827 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.05s
2026-04-05 07:05:17.562832 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2026-04-05 07:05:17.562837 | orchestrator | skyline : Copying over custom logos ------------------------------------- 0.70s
2026-04-05 07:05:17.562841 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 0.68s
2026-04-05 07:05:17.562846 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s
2026-04-05 07:05:17.562851 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 0.55s
2026-04-05 07:05:17.562856 | orchestrator | skyline : Flush handlers ------------------------------------------------ 0.24s
2026-04-05 07:05:17.774284 | orchestrator | + osism apply -a upgrade glance
2026-04-05 07:05:19.128773 | orchestrator | 2026-04-05 07:05:19 | INFO  | Prepare task for execution of glance.
2026-04-05 07:05:19.195276 | orchestrator | 2026-04-05 07:05:19 | INFO  | Task d0d9e179-a3f8-4120-aeea-33df8f86e8f4 (glance) was prepared for execution.
2026-04-05 07:05:19.195429 | orchestrator | 2026-04-05 07:05:19 | INFO  | It takes a moment until task d0d9e179-a3f8-4120-aeea-33df8f86e8f4 (glance) has been started and output is visible here.
2026-04-05 07:06:04.430735 | orchestrator |
2026-04-05 07:06:04.430854 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:06:04.430870 | orchestrator |
2026-04-05 07:06:04.430882 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:06:04.430893 | orchestrator | Sunday 05 April 2026 07:05:24 +0000 (0:00:01.526) 0:00:01.526 **********
2026-04-05 07:06:04.430905 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:06:04.430917 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:06:04.430927 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:06:04.430938 | orchestrator |
2026-04-05 07:06:04.430949 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:06:04.430960 | orchestrator | Sunday 05 April 2026 07:05:25 +0000 (0:00:01.780) 0:00:03.307 **********
2026-04-05 07:06:04.430971 | orchestrator | ok: [testbed-node-0]
=> (item=enable_glance_True)
2026-04-05 07:06:04.430982 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-05 07:06:04.430993 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-05 07:06:04.431003 | orchestrator |
2026-04-05 07:06:04.431014 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-05 07:06:04.431025 | orchestrator |
2026-04-05 07:06:04.431036 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:06:04.431047 | orchestrator | Sunday 05 April 2026 07:05:27 +0000 (0:00:01.955) 0:00:05.262 **********
2026-04-05 07:06:04.431058 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:06:04.431093 | orchestrator |
2026-04-05 07:06:04.431105 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:06:04.431115 | orchestrator | Sunday 05 April 2026 07:05:30 +0000 (0:00:02.650) 0:00:07.913 **********
2026-04-05 07:06:04.431126 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:06:04.431137 | orchestrator |
2026-04-05 07:06:04.431148 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-05 07:06:04.431159 | orchestrator | Sunday 05 April 2026 07:05:32 +0000 (0:00:02.232) 0:00:10.146 **********
2026-04-05 07:06:04.431169 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:06:04.431180 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:06:04.431190 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:06:04.431202 | orchestrator |
2026-04-05 07:06:04.431213 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:06:04.431223 | orchestrator | Sunday 05 April 2026 07:05:34 +0000 (0:00:01.503)
0:00:11.649 ********** 2026-04-05 07:06:04.431234 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:06:04.431246 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:06:04.431257 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0 2026-04-05 07:06:04.431275 | orchestrator | 2026-04-05 07:06:04.431295 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-05 07:06:04.431320 | orchestrator | Sunday 05 April 2026 07:05:36 +0000 (0:00:01.968) 0:00:13.618 ********** 2026-04-05 07:06:04.431418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-05 07:06:04.431447 | orchestrator |
2026-04-05 07:06:04.431467 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:06:04.431486 | orchestrator | Sunday 05 April 2026 07:05:40 +0000 (0:00:04.719) 0:00:18.337 **********
2026-04-05 07:06:04.431505 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0
2026-04-05 07:06:04.431524 | orchestrator |
2026-04-05 07:06:04.431566 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-05 07:06:04.431587 | orchestrator | Sunday 05 April 2026 07:05:42 +0000 (0:00:01.459) 0:00:19.797 **********
2026-04-05 07:06:04.431606 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:06:04.431640 | orchestrator |
2026-04-05 07:06:04.431658 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-05 07:06:04.431674 | orchestrator | Sunday 05 April 2026 07:05:47 +0000 (0:00:04.664) 0:00:24.461 **********
2026-04-05 07:06:04.431691 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-05 07:06:04.431710 | orchestrator |
2026-04-05 07:06:04.431727 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-05 07:06:04.431743 | orchestrator | Sunday 05 April 2026 07:05:49 +0000 (0:00:02.513) 0:00:26.975 **********
2026-04-05 07:06:04.431760 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph',
'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-05 07:06:04.431777 | orchestrator |
2026-04-05 07:06:04.431794 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-05 07:06:04.431812 | orchestrator | Sunday 05 April 2026 07:05:51 +0000 (0:00:01.954) 0:00:28.929 **********
2026-04-05 07:06:04.431831 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:06:04.431849 | orchestrator |
2026-04-05 07:06:04.431868 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-05 07:06:04.431887 | orchestrator | Sunday 05 April 2026 07:05:52 +0000 (0:00:01.464) 0:00:30.394 **********
2026-04-05 07:06:04.431906 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:06:04.431926 | orchestrator |
2026-04-05 07:06:04.431945 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-05 07:06:04.431963 | orchestrator | Sunday 05 April 2026 07:05:54 +0000 (0:00:01.126) 0:00:31.521 **********
2026-04-05 07:06:04.431982 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:06:04.432001 | orchestrator |
2026-04-05 07:06:04.432019 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:06:04.432034 | orchestrator | Sunday 05 April 2026 07:05:55 +0000 (0:00:01.170) 0:00:32.692 **********
2026-04-05 07:06:04.432047 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0
2026-04-05 07:06:04.432064 | orchestrator |
2026-04-05 07:06:04.432075 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-05 07:06:04.432086 | orchestrator | Sunday 05 April 2026 07:05:56 +0000 (0:00:01.465) 0:00:34.157 **********
2026-04-05 07:06:04.432108 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True,
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:06:04.432130 | orchestrator | 2026-04-05 07:06:04.432142 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-05 07:06:04.432152 | orchestrator | Sunday 05 April 2026 07:06:01 +0000 (0:00:04.762) 0:00:38.920 ********** 2026-04-05 07:06:04.432179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:07:58.232019 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:07:58.232138 | orchestrator | 2026-04-05 07:07:58.232154 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-05 07:07:58.232164 | orchestrator | Sunday 05 April 2026 07:06:05 +0000 
(0:00:04.079) 0:00:42.999 ********** 2026-04-05 07:07:58.232191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:07:58.232245 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:07:58.232260 | orchestrator | 2026-04-05 07:07:58.232272 | orchestrator | TASK [glance : 
Creating TLS backend PEM File] ********************************** 2026-04-05 07:07:58.232284 | orchestrator | Sunday 05 April 2026 07:06:09 +0000 (0:00:04.030) 0:00:47.030 ********** 2026-04-05 07:07:58.232296 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:07:58.232308 | orchestrator | 2026-04-05 07:07:58.232321 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-05 07:07:58.232334 | orchestrator | Sunday 05 April 2026 07:06:13 +0000 (0:00:04.342) 0:00:51.372 ********** 2026-04-05 07:07:58.232366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-05 07:07:58.232447 | orchestrator |
2026-04-05 07:07:58.232455 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-05 07:07:58.232463 | orchestrator | Sunday 05 April 2026 07:06:19 +0000 (0:00:05.111) 0:00:56.484 **********
2026-04-05 07:07:58.232470 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:07:58.232477 | orchestrator |
2026-04-05 07:07:58.232484 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-05 07:07:58.232491 | orchestrator | Sunday 05 April 2026 07:06:25 +0000 (0:00:06.651) 0:01:03.136 **********
2026-04-05 07:07:58.232498 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232506 | orchestrator |
2026-04-05 07:07:58.232513 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-05 07:07:58.232520 | orchestrator | Sunday 05 April 2026 07:06:29 +0000 (0:00:04.150) 0:01:07.287 **********
2026-04-05 07:07:58.232527 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232534 | orchestrator |
2026-04-05 07:07:58.232541 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-05 07:07:58.232548 | orchestrator | Sunday 05 April 2026 07:06:34 +0000 (0:00:04.163) 0:01:11.450 **********
2026-04-05 07:07:58.232556 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232563 | orchestrator |
2026-04-05 07:07:58.232578 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-05 07:07:58.232587 | orchestrator | Sunday 05 April 2026 07:06:38 +0000 (0:00:04.146) 0:01:15.597 **********
2026-04-05 07:07:58.232604 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232613 | orchestrator |
2026-04-05 07:07:58.232622 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-05 07:07:58.232630 | orchestrator | Sunday 05 April 2026 07:06:39 +0000 (0:00:01.143) 0:01:16.740 **********
2026-04-05 07:07:58.232639 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-05 07:07:58.232649 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232657 | orchestrator |
2026-04-05 07:07:58.232665 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-05 07:07:58.232674 | orchestrator | Sunday 05 April 2026 07:06:43 +0000 (0:00:04.365) 0:01:21.105 **********
2026-04-05 07:07:58.232682 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232690 | orchestrator |
2026-04-05 07:07:58.232698 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-05 07:07:58.232707 | orchestrator | Sunday 05 April 2026 07:06:47 +0000 (0:00:04.275) 0:01:25.380 **********
2026-04-05 07:07:58.232716 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232725 | orchestrator |
2026-04-05 07:07:58.232733 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:07:58.232741 | orchestrator | Sunday 05 April 2026 07:06:52 +0000 (0:00:04.479) 0:01:29.860 **********
2026-04-05 07:07:58.232750 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:07:58.232758 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:07:58.232767 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0
2026-04-05 07:07:58.232776 | orchestrator |
2026-04-05 07:07:58.232785 | orchestrator | TASK [glance : Stop glance service] ********************************************
2026-04-05 07:07:58.232793 | orchestrator | Sunday 05 April 2026 07:06:54 +0000 (0:00:01.863) 0:01:31.723 **********
2026-04-05 07:07:58.232801 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:07:58.232810 | orchestrator |
2026-04-05 07:07:58.232818 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-05 07:07:58.232827 | orchestrator | Sunday 05 April 2026 07:07:07 +0000 (0:00:13.177) 0:01:44.901 **********
2026-04-05 07:07:58.232836 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:07:58.232844 | orchestrator |
2026-04-05 07:07:58.232853 | orchestrator | TASK [glance : Running Glance database expand container] ***********************
2026-04-05 07:07:58.232861 | orchestrator | Sunday 05 April 2026 07:07:10 +0000 (0:00:03.247) 0:01:48.148 **********
2026-04-05 07:07:58.232870 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:07:58.232878 | orchestrator |
2026-04-05 07:07:58.232887 | orchestrator | TASK [glance : Running Glance database migrate container] **********************
2026-04-05 07:07:58.232896 | orchestrator | Sunday 05 April 2026 07:07:37 +0000 (0:00:26.492) 0:02:14.640 **********
2026-04-05 07:07:58.232904 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:07:58.232913 | orchestrator |
2026-04-05 07:07:58.232922 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:07:58.232931 | orchestrator | Sunday 05 April 2026 07:07:53 +0000 (0:00:15.896) 0:02:30.537 **********
2026-04-05 07:07:58.232940 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:07:58.232949 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2
2026-04-05 07:07:58.232957 | orchestrator |
2026-04-05 07:07:58.232964 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-05 07:07:58.232972 |
orchestrator | Sunday 05 April 2026 07:07:54 +0000 (0:00:01.423) 0:02:31.960 ********** 2026-04-05 07:07:58.232992 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:08:23.647169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:08:23.647282 | orchestrator | 2026-04-05 07:08:23.647299 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 07:08:23.647312 | orchestrator | Sunday 05 April 2026 07:07:59 +0000 (0:00:04.966) 0:02:36.927 ********** 2026-04-05 07:08:23.647324 | orchestrator | included: 
/ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2 2026-04-05 07:08:23.647335 | orchestrator | 2026-04-05 07:08:23.647347 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-05 07:08:23.647357 | orchestrator | Sunday 05 April 2026 07:08:00 +0000 (0:00:01.230) 0:02:38.158 ********** 2026-04-05 07:08:23.647368 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:08:23.647434 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:08:23.647446 | orchestrator | 2026-04-05 07:08:23.647457 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-05 07:08:23.647478 | orchestrator | Sunday 05 April 2026 07:08:05 +0000 (0:00:04.843) 0:02:43.001 ********** 2026-04-05 07:08:23.647490 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 07:08:23.647529 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 07:08:23.647540 | orchestrator | 2026-04-05 07:08:23.647551 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-05 07:08:23.647562 | orchestrator | Sunday 05 April 2026 07:08:07 +0000 (0:00:02.361) 0:02:45.363 ********** 2026-04-05 07:08:23.647573 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 07:08:23.647584 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-05 07:08:23.647595 | orchestrator | 2026-04-05 07:08:23.647606 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-05 07:08:23.647617 | orchestrator | 
Sunday 05 April 2026 07:08:10 +0000 (0:00:02.074) 0:02:47.438 ********** 2026-04-05 07:08:23.647627 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:08:23.647638 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:08:23.647649 | orchestrator | 2026-04-05 07:08:23.647660 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-05 07:08:23.647671 | orchestrator | Sunday 05 April 2026 07:08:11 +0000 (0:00:01.844) 0:02:49.282 ********** 2026-04-05 07:08:23.647682 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:08:23.647694 | orchestrator | 2026-04-05 07:08:23.647707 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-05 07:08:23.647719 | orchestrator | Sunday 05 April 2026 07:08:12 +0000 (0:00:01.125) 0:02:50.407 ********** 2026-04-05 07:08:23.647732 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:08:23.647746 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:08:23.647759 | orchestrator | 2026-04-05 07:08:23.647771 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-05 07:08:23.647785 | orchestrator | Sunday 05 April 2026 07:08:14 +0000 (0:00:01.211) 0:02:51.618 ********** 2026-04-05 07:08:23.647812 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2 2026-04-05 07:08:23.647827 | orchestrator | 2026-04-05 07:08:23.647858 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-05 07:08:23.647873 | orchestrator | Sunday 05 April 2026 07:08:15 +0000 (0:00:01.328) 0:02:52.947 ********** 2026-04-05 07:08:23.647889 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:08:23.647914 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:08:23.647928 | orchestrator | 2026-04-05 07:08:23.647942 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-05 07:08:23.647956 | orchestrator | Sunday 05 April 2026 07:08:20 +0000 (0:00:05.115) 0:02:58.062 ********** 2026-04-05 07:08:23.647985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:08:37.386291 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:08:37.386459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:08:37.386483 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:08:37.386496 | orchestrator | 2026-04-05 07:08:37.386508 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-05 07:08:37.386521 | orchestrator | Sunday 05 April 2026 07:08:24 +0000 (0:00:04.286) 0:03:02.349 ********** 2026-04-05 07:08:37.386550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:08:37.386563 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:08:37.386595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:08:37.386632 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:08:37.386644 | orchestrator | 2026-04-05 07:08:37.386655 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-05 07:08:37.386666 | orchestrator | Sunday 05 April 2026 07:08:28 +0000 (0:00:04.031) 0:03:06.380 ********** 2026-04-05 07:08:37.386677 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:08:37.386688 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:08:37.386699 | orchestrator | 2026-04-05 07:08:37.386710 | orchestrator | TASK [glance : Copying over config.json files for 
services] ******************** 2026-04-05 07:08:37.386721 | orchestrator | Sunday 05 April 2026 07:08:33 +0000 (0:00:04.427) 0:03:10.808 ********** 2026-04-05 07:08:37.386738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:08:37.386770 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:09:23.311065 | orchestrator | 2026-04-05 07:09:23.311201 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-05 07:09:23.311220 | orchestrator | Sunday 05 April 2026 07:08:38 +0000 (0:00:05.093) 
0:03:15.902 ********** 2026-04-05 07:09:23.311233 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:09:23.311245 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:09:23.311258 | orchestrator | 2026-04-05 07:09:23.311332 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-05 07:09:23.311347 | orchestrator | Sunday 05 April 2026 07:08:45 +0000 (0:00:06.816) 0:03:22.718 ********** 2026-04-05 07:09:23.311359 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311370 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311381 | orchestrator | 2026-04-05 07:09:23.311418 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-05 07:09:23.311431 | orchestrator | Sunday 05 April 2026 07:08:49 +0000 (0:00:04.269) 0:03:26.988 ********** 2026-04-05 07:09:23.311442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311453 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311464 | orchestrator | 2026-04-05 07:09:23.311475 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-05 07:09:23.311487 | orchestrator | Sunday 05 April 2026 07:08:54 +0000 (0:00:04.570) 0:03:31.559 ********** 2026-04-05 07:09:23.311498 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311510 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311521 | orchestrator | 2026-04-05 07:09:23.311532 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-05 07:09:23.311543 | orchestrator | Sunday 05 April 2026 07:08:58 +0000 (0:00:04.421) 0:03:35.980 ********** 2026-04-05 07:09:23.311554 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311565 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311576 | orchestrator | 2026-04-05 07:09:23.311588 | orchestrator | TASK [glance : Copying over 
glance-haproxy-tls.cfg] **************************** 2026-04-05 07:09:23.311599 | orchestrator | Sunday 05 April 2026 07:08:59 +0000 (0:00:01.287) 0:03:37.267 ********** 2026-04-05 07:09:23.311610 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 07:09:23.311646 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311658 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-05 07:09:23.311670 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311681 | orchestrator | 2026-04-05 07:09:23.311692 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-05 07:09:23.311704 | orchestrator | Sunday 05 April 2026 07:09:04 +0000 (0:00:04.600) 0:03:41.868 ********** 2026-04-05 07:09:23.311715 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311726 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311737 | orchestrator | 2026-04-05 07:09:23.311748 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-05 07:09:23.311759 | orchestrator | Sunday 05 April 2026 07:09:08 +0000 (0:00:04.534) 0:03:46.402 ********** 2026-04-05 07:09:23.311770 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:09:23.311781 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:09:23.311792 | orchestrator | 2026-04-05 07:09:23.311802 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-05 07:09:23.311813 | orchestrator | Sunday 05 April 2026 07:09:13 +0000 (0:00:04.743) 0:03:51.145 ********** 2026-04-05 07:09:23.311829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:09:23.311922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:09:23.311950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-05 07:09:23.311964 | orchestrator | 2026-04-05 07:09:23.311975 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-05 07:09:23.311987 | orchestrator | Sunday 05 April 2026 07:09:18 +0000 (0:00:05.156) 0:03:56.302 ********** 2026-04-05 07:09:23.311998 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:09:23.312009 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:09:23.312020 | orchestrator | } 2026-04-05 07:09:23.312032 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:09:23.312043 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:09:23.312082 | orchestrator | } 2026-04-05 07:09:23.312095 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:09:23.312105 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 
07:09:23.312117 | orchestrator | } 2026-04-05 07:09:23.312128 | orchestrator | 2026-04-05 07:09:23.312139 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:09:23.312150 | orchestrator | Sunday 05 April 2026 07:09:20 +0000 (0:00:01.385) 0:03:57.688 ********** 2026-04-05 07:09:23.312179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:10:29.957718 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:10:29.957836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:10:29.957857 
| orchestrator | skipping: [testbed-node-1] 2026-04-05 07:10:29.957870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-05 07:10:29.957920 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:10:29.957933 | orchestrator | 2026-04-05 07:10:29.957945 | orchestrator | TASK 
[glance : Flush handlers] *************************************************
2026-04-05 07:10:29.957957 | orchestrator | Sunday 05 April 2026 07:09:24 +0000 (0:00:04.520) 0:04:02.208 **********
2026-04-05 07:10:29.957968 | orchestrator |
2026-04-05 07:10:29.957979 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-05 07:10:29.957989 | orchestrator | Sunday 05 April 2026 07:09:25 +0000 (0:00:00.431) 0:04:02.639 **********
2026-04-05 07:10:29.958000 | orchestrator |
2026-04-05 07:10:29.958011 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-05 07:10:29.958094 | orchestrator | Sunday 05 April 2026 07:09:25 +0000 (0:00:00.421) 0:04:03.061 **********
2026-04-05 07:10:29.958107 | orchestrator |
2026-04-05 07:10:29.958118 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-05 07:10:29.958128 | orchestrator | Sunday 05 April 2026 07:09:26 +0000 (0:00:00.799) 0:04:03.861 **********
2026-04-05 07:10:29.958139 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:10:29.958150 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:10:29.958161 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:10:29.958171 | orchestrator |
2026-04-05 07:10:29.958182 | orchestrator | TASK [glance : Running Glance database contract container] *********************
2026-04-05 07:10:29.958193 | orchestrator | Sunday 05 April 2026 07:10:07 +0000 (0:00:41.133) 0:04:44.995 **********
2026-04-05 07:10:29.958203 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:10:29.958214 | orchestrator |
2026-04-05 07:10:29.958225 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-05 07:10:29.958236 | orchestrator | Sunday 05 April 2026 07:10:23 +0000 (0:00:15.747) 0:05:00.743 **********
2026-04-05 07:10:29.958247 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:10:29.958259 | orchestrator |
2026-04-05 07:10:29.958272 | orchestrator | TASK [glance : Finish Glance upgrade] ******************************************
2026-04-05 07:10:29.958285 | orchestrator | Sunday 05 April 2026 07:10:26 +0000 (0:00:03.106) 0:05:03.850 **********
2026-04-05 07:10:29.958298 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:10:29.958310 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:10:29.958323 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:10:29.958335 | orchestrator |
2026-04-05 07:10:29.958347 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-05 07:10:29.958360 | orchestrator | Sunday 05 April 2026 07:10:27 +0000 (0:00:01.360) 0:05:05.210 **********
2026-04-05 07:10:29.958372 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:10:29.958385 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:10:29.958397 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:10:29.958434 | orchestrator |
2026-04-05 07:10:29.958448 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:10:29.958462 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-05 07:10:29.958486 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:10:29.958499 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-05 07:10:29.958512 | orchestrator |
2026-04-05 07:10:29.958526 | orchestrator |
2026-04-05 07:10:29.958539 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:10:29.958551 | orchestrator | Sunday 05 April 2026 07:10:29 +0000 (0:00:01.758) 0:05:06.969 **********
2026-04-05 07:10:29.958564 | orchestrator | ===============================================================================
2026-04-05 07:10:29.958577 | orchestrator | glance : Restart glance-api container ---------------------------------- 41.13s
2026-04-05 07:10:29.958589 | orchestrator | glance : Running Glance database expand container ---------------------- 26.49s
2026-04-05 07:10:29.958603 | orchestrator | glance : Running Glance database migrate container --------------------- 15.90s
2026-04-05 07:10:29.958616 | orchestrator | glance : Running Glance database contract container -------------------- 15.75s
2026-04-05 07:10:29.958628 | orchestrator | glance : Stop glance service ------------------------------------------- 13.18s
2026-04-05 07:10:29.958638 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.82s
2026-04-05 07:10:29.958649 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.65s
2026-04-05 07:10:29.958660 | orchestrator | service-check-containers : glance | Check containers -------------------- 5.16s
2026-04-05 07:10:29.958670 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.12s
2026-04-05 07:10:29.958681 | orchestrator | glance : Copying over config.json files for services -------------------- 5.11s
2026-04-05 07:10:29.958691 | orchestrator | glance : Copying over config.json files for services -------------------- 5.09s
2026-04-05 07:10:29.958702 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.97s
2026-04-05 07:10:29.958713 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.84s
2026-04-05 07:10:29.958723 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.76s
2026-04-05 07:10:29.958734 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.74s
2026-04-05 07:10:29.958744 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.72s
2026-04-05 07:10:29.958755 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.66s
2026-04-05 07:10:29.958765 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.60s
2026-04-05 07:10:29.958776 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.57s
2026-04-05 07:10:29.958792 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.54s
2026-04-05 07:10:30.186915 | orchestrator | + osism apply -a upgrade cinder
2026-04-05 07:10:31.523903 | orchestrator | 2026-04-05 07:10:31 | INFO  | Prepare task for execution of cinder.
2026-04-05 07:10:31.605904 | orchestrator | 2026-04-05 07:10:31 | INFO  | Task 8c1bbe97-0e6e-45a4-b8a2-125a28db6fdc (cinder) was prepared for execution.
2026-04-05 07:10:31.606002 | orchestrator | 2026-04-05 07:10:31 | INFO  | It takes a moment until task 8c1bbe97-0e6e-45a4-b8a2-125a28db6fdc (cinder) has been started and output is visible here.
2026-04-05 07:10:54.668966 | orchestrator |
2026-04-05 07:10:54.669061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:10:54.669072 | orchestrator |
2026-04-05 07:10:54.669080 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:10:54.669088 | orchestrator | Sunday 05 April 2026 07:10:36 +0000 (0:00:01.654) 0:00:01.654 **********
2026-04-05 07:10:54.669094 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:10:54.669102 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:10:54.669108 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:10:54.669136 | orchestrator |
2026-04-05 07:10:54.669144 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:10:54.669152 | orchestrator | Sunday 05 April 2026 07:10:38 +0000 (0:00:01.722) 0:00:03.377 **********
2026-04-05 07:10:54.669159 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-05 07:10:54.669166 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-05 07:10:54.669174 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-05 07:10:54.669180 | orchestrator |
2026-04-05 07:10:54.669186 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-05 07:10:54.669193 | orchestrator |
2026-04-05 07:10:54.669200 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-05 07:10:54.669208 | orchestrator | Sunday 05 April 2026 07:10:40 +0000 (0:00:02.193) 0:00:05.570 **********
2026-04-05 07:10:54.669215 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:10:54.669223 | orchestrator |
2026-04-05 07:10:54.669230 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05
07:10:54.669237 | orchestrator | Sunday 05 April 2026 07:10:43 +0000 (0:00:03.068) 0:00:08.639 ********** 2026-04-05 07:10:54.669244 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:10:54.669252 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:10:54.669259 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0 2026-04-05 07:10:54.669266 | orchestrator | 2026-04-05 07:10:54.669273 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-05 07:10:54.669280 | orchestrator | Sunday 05 April 2026 07:10:45 +0000 (0:00:02.072) 0:00:10.712 ********** 2026-04-05 07:10:54.669291 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:10:54.669302 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:10:54.669324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:10:54.669353 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:10:54.669360 | 
orchestrator | 2026-04-05 07:10:54.669366 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:10:54.669373 | orchestrator | Sunday 05 April 2026 07:10:48 +0000 (0:00:03.327) 0:00:14.040 ********** 2026-04-05 07:10:54.669379 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:10:54.669385 | orchestrator | 2026-04-05 07:10:54.669392 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:10:54.669399 | orchestrator | Sunday 05 April 2026 07:10:50 +0000 (0:00:01.141) 0:00:15.182 ********** 2026-04-05 07:10:54.669406 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0 2026-04-05 07:10:54.669413 | orchestrator | 2026-04-05 07:10:54.669466 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-05 07:10:54.669474 | orchestrator | Sunday 05 April 2026 07:10:51 +0000 (0:00:01.487) 0:00:16.669 ********** 2026-04-05 07:10:54.669481 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 07:10:54.669488 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 07:10:54.669495 | orchestrator | 2026-04-05 07:10:54.669501 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-05 07:10:54.669507 | orchestrator | Sunday 05 April 2026 07:10:54 +0000 (0:00:02.699) 0:00:19.369 ********** 2026-04-05 07:10:54.669514 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:10:54.669526 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:10:54.669549 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:11:14.631839 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:11:14.631976 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:11:14.632005 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:11:14.632046 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:11:14.632119 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:11:14.632142 | orchestrator | 2026-04-05 07:11:14.632162 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-05 07:11:14.632182 | orchestrator | Sunday 05 April 2026 07:11:00 +0000 (0:00:06.202) 0:00:25.572 ********** 2026-04-05 07:11:14.632201 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:11:14.632219 | orchestrator | 2026-04-05 07:11:14.632237 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-05 07:11:14.632255 | orchestrator | Sunday 05 April 2026 07:11:02 +0000 (0:00:02.301) 0:00:27.873 ********** 2026-04-05 07:11:14.632273 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 
2026-04-05 07:11:14.632292 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 07:11:14.632311 | orchestrator | 2026-04-05 07:11:14.632329 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-05 07:11:14.632348 | orchestrator | Sunday 05 April 2026 07:11:06 +0000 (0:00:03.421) 0:00:31.295 ********** 2026-04-05 07:11:14.632367 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 07:11:14.632386 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 07:11:14.632405 | orchestrator | 2026-04-05 07:11:14.632478 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-05 07:11:14.632500 | orchestrator | Sunday 05 April 2026 07:11:07 +0000 (0:00:01.799) 0:00:33.095 ********** 2026-04-05 07:11:14.632519 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:11:14.632539 | orchestrator | 2026-04-05 07:11:14.632557 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-05 07:11:14.632575 | orchestrator | Sunday 05 April 2026 07:11:09 +0000 (0:00:01.080) 0:00:34.175 ********** 2026-04-05 07:11:14.632594 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:11:14.632613 | orchestrator | 2026-04-05 07:11:14.632631 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:11:14.632649 | orchestrator | Sunday 05 April 2026 07:11:10 +0000 (0:00:01.188) 0:00:35.363 ********** 2026-04-05 07:11:14.632667 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0 2026-04-05 07:11:14.632687 | orchestrator | 2026-04-05 07:11:14.632706 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-05 07:11:14.632738 | orchestrator | Sunday 
05 April 2026 07:11:11 +0000 (0:00:01.488) 0:00:36.852 ********** 2026-04-05 07:11:14.632759 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:11:14.632789 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:14.632821 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:21.494570 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:21.494659 | orchestrator | 2026-04-05 07:11:21.494672 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-05 07:11:21.494681 | orchestrator | Sunday 05 April 2026 07:11:16 +0000 (0:00:04.795) 0:00:41.648 ********** 2026-04-05 07:11:21.494690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:11:21.494723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494767 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:11:21.494775 | orchestrator | 2026-04-05 07:11:21.494799 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-05 07:11:21.494808 | orchestrator | Sunday 05 April 2026 07:11:18 +0000 (0:00:01.793) 0:00:43.441 ********** 2026-04-05 07:11:21.494816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:11:21.494836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:11:21.494861 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:11:21.494866 | orchestrator | 2026-04-05 07:11:21.494870 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-05 07:11:21.494875 | orchestrator | Sunday 05 April 2026 07:11:20 +0000 (0:00:01.734) 0:00:45.176 ********** 2026-04-05 07:11:21.494885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 
07:11:48.787792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.787932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.787951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.787964 | orchestrator | 2026-04-05 07:11:48.787977 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-05 07:11:48.787989 | orchestrator | Sunday 05 April 2026 07:11:25 +0000 (0:00:05.237) 0:00:50.414 ********** 2026-04-05 07:11:48.788000 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 07:11:48.788012 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:11:48.788024 | orchestrator | 2026-04-05 07:11:48.788036 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-05 07:11:48.788060 | orchestrator | Sunday 05 April 2026 07:11:26 +0000 (0:00:01.458) 0:00:51.872 ********** 2026-04-05 07:11:48.788072 | orchestrator | included: service-uwsgi-config for testbed-node-0 2026-04-05 07:11:48.788083 | orchestrator | 2026-04-05 07:11:48.788094 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-05 07:11:48.788104 | orchestrator | Sunday 05 April 2026 07:11:28 +0000 (0:00:01.784) 0:00:53.656 ********** 2026-04-05 07:11:48.788115 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:11:48.788126 | orchestrator | 2026-04-05 07:11:48.788136 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-05 07:11:48.788147 | orchestrator | Sunday 05 April 2026 07:11:31 +0000 (0:00:02.541) 0:00:56.198 ********** 2026-04-05 07:11:48.788161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:11:48.788209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.788222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.788233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:11:48.788245 | orchestrator | 2026-04-05 07:11:48.788255 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-05 07:11:48.788266 | orchestrator | Sunday 05 April 2026 07:11:43 +0000 (0:00:12.196) 0:01:08.394 ********** 2026-04-05 07:11:48.788277 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:11:48.788287 | orchestrator | 2026-04-05 07:11:48.788298 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-05 07:11:48.788309 | orchestrator | Sunday 05 April 2026 07:11:45 +0000 (0:00:02.366) 0:01:10.761 ********** 2026-04-05 07:11:48.788319 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:11:48.788330 | orchestrator | 2026-04-05 07:11:48.788341 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-05 07:11:48.788357 | orchestrator | Sunday 05 April 2026 07:11:48 +0000 (0:00:02.540) 0:01:13.302 ********** 
2026-04-05 07:11:48.788369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:11:48.788395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:12:28.433299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:12:28.433413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:12:28.433430 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:12:28.433516 | orchestrator | 2026-04-05 07:12:28.433536 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-05 07:12:28.433556 | orchestrator | Sunday 05 April 2026 07:11:49 +0000 (0:00:01.734) 0:01:15.036 ********** 2026-04-05 07:12:28.433574 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:12:28.433591 | orchestrator | 2026-04-05 07:12:28.433607 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-05 07:12:28.433627 | orchestrator | Sunday 05 April 2026 07:11:51 +0000 
(0:00:01.637) 0:01:16.673 ********** 2026-04-05 07:12:28.433645 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:12:28.433664 | orchestrator | 2026-04-05 07:12:28.433683 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-05 07:12:28.433701 | orchestrator | Sunday 05 April 2026 07:12:26 +0000 (0:00:35.119) 0:01:51.793 ********** 2026-04-05 07:12:28.433739 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:28.433778 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:28.433814 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:28.433830 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:28.433852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:28.433867 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:28.433890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:28.433913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:36.288731 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:36.288844 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:36.288877 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:36.288891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:36.288925 | orchestrator | 2026-04-05 07:12:36.288940 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:12:36.288953 | orchestrator | Sunday 05 April 2026 07:12:30 +0000 (0:00:03.538) 0:01:55.332 ********** 2026-04-05 07:12:36.288964 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:12:36.288976 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:12:36.288987 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:12:36.288997 | orchestrator | 2026-04-05 07:12:36.289008 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:12:36.289019 | orchestrator | Sunday 05 April 2026 07:12:31 +0000 (0:00:01.395) 0:01:56.727 ********** 2026-04-05 07:12:36.289030 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:12:36.289041 | orchestrator | 2026-04-05 07:12:36.289052 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-05 07:12:36.289063 | orchestrator | Sunday 05 April 2026 07:12:33 +0000 (0:00:01.517) 0:01:58.245 ********** 2026-04-05 07:12:36.289074 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 07:12:36.289084 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-05 07:12:36.289095 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-05 07:12:36.289106 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 07:12:36.289116 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-05 07:12:36.289127 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 
2026-04-05 07:12:36.289137 | orchestrator | 2026-04-05 07:12:36.289148 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-05 07:12:36.289175 | orchestrator | Sunday 05 April 2026 07:12:35 +0000 (0:00:02.663) 0:02:00.908 ********** 2026-04-05 07:12:36.289191 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:36.289211 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:36.289233 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:36.289245 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:36.289267 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:37.661943 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:37.662106 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:37.662120 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:37.662128 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:37.662150 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:37.662168 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-05 07:12:37.662175 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-05 07:12:37.662182 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:37.662195 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 
'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:41.103508 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:41.103652 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:12:41.103669 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:12:41.103681 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:12:41.103711 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:41.103734 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:41.103752 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-05 07:12:41.103764 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:12:41.103775 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 
2026-04-05 07:12:41.103793 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-05 07:12:57.856264 | orchestrator | 2026-04-05 07:12:57.856374 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-05 07:12:57.856391 | orchestrator | Sunday 05 April 2026 07:12:42 +0000 (0:00:06.425) 0:02:07.333 ********** 2026-04-05 07:12:57.856404 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856417 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856428 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856483 | orchestrator | 2026-04-05 07:12:57.856506 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-05 07:12:57.856524 | orchestrator | Sunday 05 April 2026 07:12:45 +0000 (0:00:02.852) 0:02:10.186 
********** 2026-04-05 07:12:57.856563 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856582 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856600 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-05 07:12:57.856618 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 07:12:57.856638 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 07:12:57.856657 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-05 07:12:57.856675 | orchestrator | 2026-04-05 07:12:57.856694 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-05 07:12:57.856708 | orchestrator | Sunday 05 April 2026 07:12:48 +0000 (0:00:03.693) 0:02:13.880 ********** 2026-04-05 07:12:57.856720 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-05 07:12:57.856731 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-05 07:12:57.856742 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-05 07:12:57.856753 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-05 07:12:57.856763 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-05 07:12:57.856774 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 
2026-04-05 07:12:57.856785 | orchestrator | 2026-04-05 07:12:57.856798 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-05 07:12:57.856810 | orchestrator | Sunday 05 April 2026 07:12:50 +0000 (0:00:02.062) 0:02:15.942 ********** 2026-04-05 07:12:57.856823 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:12:57.856837 | orchestrator | 2026-04-05 07:12:57.856850 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-05 07:12:57.856862 | orchestrator | Sunday 05 April 2026 07:12:51 +0000 (0:00:01.142) 0:02:17.084 ********** 2026-04-05 07:12:57.856875 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:12:57.856887 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:12:57.856899 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:12:57.856934 | orchestrator | 2026-04-05 07:12:57.856947 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-05 07:12:57.856960 | orchestrator | Sunday 05 April 2026 07:12:53 +0000 (0:00:01.588) 0:02:18.673 ********** 2026-04-05 07:12:57.856973 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:12:57.856987 | orchestrator | 2026-04-05 07:12:57.857000 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-05 07:12:57.857012 | orchestrator | Sunday 05 April 2026 07:12:54 +0000 (0:00:01.331) 0:02:20.005 ********** 2026-04-05 07:12:57.857047 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:57.857071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:57.857085 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:12:57.857097 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:57.857118 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:57.857130 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:12:57.857149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814509 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814595 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814606 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814635 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814643 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:00.814650 | orchestrator | 2026-04-05 07:13:00.814659 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-05 07:13:00.814668 | orchestrator | Sunday 05 April 2026 07:12:59 +0000 (0:00:05.108) 0:02:25.113 ********** 2026-04-05 07:13:00.814696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:00.814706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:00.814727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:00.814735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:00.814742 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:00.814751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:00.814768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524271 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:13:02.524279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:02.524285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524327 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:13:02.524332 | orchestrator | 2026-04-05 07:13:02.524338 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-05 07:13:02.524344 | 
orchestrator | Sunday 05 April 2026 07:13:02 +0000 (0:00:02.028) 0:02:27.141 ********** 2026-04-05 07:13:02.524350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:02.524356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:02.524374 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:02.524383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:05.588653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588786 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588800 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:13:05.588834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:05.588850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:05.588934 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:13:05.588946 | orchestrator | 2026-04-05 07:13:05.588959 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-05 07:13:05.588972 | orchestrator | Sunday 05 April 2026 07:13:03 +0000 (0:00:01.826) 0:02:28.968 ********** 2026-04-05 07:13:05.588984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:05.589004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:05.589035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:19.085110 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085433 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:19.085526 | orchestrator | 2026-04-05 07:13:19.085547 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-05 07:13:19.085568 | orchestrator | Sunday 05 April 2026 07:13:09 +0000 (0:00:05.578) 0:02:34.547 ********** 2026-04-05 07:13:19.085601 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 07:13:19.085621 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:19.085651 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 07:13:19.085670 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:13:19.085689 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-05 07:13:19.085708 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:13:19.085727 | orchestrator | 2026-04-05 07:13:19.085747 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-05 07:13:19.085766 | orchestrator | Sunday 05 April 2026 07:13:11 +0000 (0:00:01.754) 0:02:36.301 ********** 2026-04-05 07:13:19.085784 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:13:19.085804 | orchestrator | 2026-04-05 07:13:19.085823 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-05 07:13:19.085842 | orchestrator | Sunday 05 April 2026 07:13:12 +0000 (0:00:01.732) 0:02:38.034 ********** 2026-04-05 07:13:19.085860 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:13:19.085880 | orchestrator | 
changed: [testbed-node-1] 2026-04-05 07:13:19.085898 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:13:19.085915 | orchestrator | 2026-04-05 07:13:19.085933 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-05 07:13:19.085951 | orchestrator | Sunday 05 April 2026 07:13:15 +0000 (0:00:03.078) 0:02:41.112 ********** 2026-04-05 07:13:19.085984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:27.596503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:27.596646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:27.596725 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596813 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2026-04-05 07:13:27.596892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:27.596939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:34.809627 | orchestrator | 2026-04-05 07:13:34.809721 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-05 07:13:34.809734 | orchestrator | Sunday 05 April 2026 07:13:28 +0000 (0:00:12.684) 0:02:53.797 ********** 2026-04-05 07:13:34.809743 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:13:34.809752 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:13:34.809760 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:13:34.809768 | orchestrator | 2026-04-05 07:13:34.809777 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-05 07:13:34.809785 | orchestrator | Sunday 05 April 2026 07:13:31 +0000 (0:00:02.736) 0:02:56.534 ********** 2026-04-05 07:13:34.809793 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:13:34.809801 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:13:34.809810 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:13:34.809839 | orchestrator | 2026-04-05 07:13:34.809847 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-05 07:13:34.809855 | orchestrator | Sunday 05 April 2026 07:13:34 +0000 (0:00:02.801) 0:02:59.335 ********** 2026-04-05 07:13:34.809869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:34.809894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809922 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:34.809946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:34.809963 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:34.809992 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:13:34.810001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:34.810064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:40.908940 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:40.909048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:40.909062 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:13:40.909073 | orchestrator | 2026-04-05 07:13:40.909082 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-05 07:13:40.909091 | orchestrator | Sunday 05 April 2026 07:13:35 +0000 (0:00:01.682) 0:03:01.018 ********** 2026-04-05 07:13:40.909099 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:40.909107 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:13:40.909115 | 
orchestrator | skipping: [testbed-node-2] 2026-04-05 07:13:40.909123 | orchestrator | 2026-04-05 07:13:40.909131 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-05 07:13:40.909139 | orchestrator | Sunday 05 April 2026 07:13:37 +0000 (0:00:01.684) 0:03:02.703 ********** 2026-04-05 07:13:40.909151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:40.909162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:40.909206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:13:40.909222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:40.909232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:40.909240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:40.909249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:40.909270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:44.876649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:44.876767 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:44.876785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:44.876797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:13:44.876833 | orchestrator | 2026-04-05 07:13:44.876846 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-05 07:13:44.876859 | orchestrator | Sunday 05 April 2026 07:13:42 +0000 (0:00:05.245) 0:03:07.949 ********** 2026-04-05 07:13:44.876871 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:13:44.876911 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:13:44.876923 | orchestrator | } 2026-04-05 07:13:44.876934 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:13:44.876945 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:13:44.876956 | orchestrator | } 2026-04-05 07:13:44.876967 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:13:44.876977 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:13:44.876988 | orchestrator | } 2026-04-05 07:13:44.876999 | orchestrator | 2026-04-05 07:13:44.877010 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:13:44.877021 | orchestrator | Sunday 05 April 2026 07:13:44 +0000 (0:00:01.418) 0:03:09.367 ********** 2026-04-05 07:13:44.877057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:44.877074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:13:44.877092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:13:44.877104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:13:44.877124 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:13:44.877136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:13:44.877159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:16:06.556616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:16:06.556741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 
07:16:06.556758 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:16:06.556771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:16:06.556804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:16:06.556815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-05 07:16:06.556840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-05 07:16:06.556850 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:16:06.556859 | orchestrator | 2026-04-05 07:16:06.556869 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 07:16:06.556880 | orchestrator | Sunday 05 April 2026 07:13:46 +0000 (0:00:01.838) 0:03:11.206 ********** 2026-04-05 07:16:06.556888 | orchestrator | 2026-04-05 07:16:06.556897 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 07:16:06.556905 | orchestrator | Sunday 05 April 2026 07:13:46 +0000 (0:00:00.444) 0:03:11.651 ********** 2026-04-05 
07:16:06.556914 | orchestrator | 2026-04-05 07:16:06.556926 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-05 07:16:06.556935 | orchestrator | Sunday 05 April 2026 07:13:47 +0000 (0:00:00.648) 0:03:12.299 ********** 2026-04-05 07:16:06.556943 | orchestrator | 2026-04-05 07:16:06.556952 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-05 07:16:06.556961 | orchestrator | Sunday 05 April 2026 07:13:47 +0000 (0:00:00.803) 0:03:13.103 ********** 2026-04-05 07:16:06.556969 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:16:06.556985 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:16:06.556994 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:16:06.557002 | orchestrator | 2026-04-05 07:16:06.557011 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-05 07:16:06.557020 | orchestrator | Sunday 05 April 2026 07:14:20 +0000 (0:00:32.537) 0:03:45.641 ********** 2026-04-05 07:16:06.557029 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:16:06.557037 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:16:06.557046 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:16:06.557055 | orchestrator | 2026-04-05 07:16:06.557063 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-05 07:16:06.557073 | orchestrator | Sunday 05 April 2026 07:14:33 +0000 (0:00:12.933) 0:03:58.575 ********** 2026-04-05 07:16:06.557084 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:16:06.557094 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:16:06.557104 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:16:06.557115 | orchestrator | 2026-04-05 07:16:06.557126 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-05 07:16:06.557136 | orchestrator | Sunday 05 April 2026 
07:15:09 +0000 (0:00:36.345) 0:04:34.920 ********** 2026-04-05 07:16:06.557147 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:16:06.557157 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:16:06.557166 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:16:06.557177 | orchestrator | 2026-04-05 07:16:06.557187 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-05 07:16:06.557199 | orchestrator | Sunday 05 April 2026 07:15:28 +0000 (0:00:19.061) 0:04:53.981 ********** 2026-04-05 07:16:06.557209 | orchestrator | Pausing for 30 seconds 2026-04-05 07:16:06.557220 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:16:06.557230 | orchestrator | 2026-04-05 07:16:06.557241 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-05 07:16:06.557251 | orchestrator | Sunday 05 April 2026 07:16:00 +0000 (0:00:31.530) 0:05:25.512 ********** 2026-04-05 07:16:06.557262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:06.557282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:39.361346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:39.361426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:39.361544 | orchestrator | 2026-04-05 07:16:39.361550 | orchestrator | TASK [cinder : Running Cinder online schema migration] ************************* 2026-04-05 07:16:39.361556 | orchestrator | Sunday 05 April 2026 07:16:24 +0000 (0:00:24.159) 0:05:49.671 ********** 2026-04-05 07:16:39.361561 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:16:39.361567 | orchestrator | 2026-04-05 07:16:39.361571 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:16:39.361581 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 07:16:39.361587 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 07:16:39.361592 | orchestrator | testbed-node-2 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 07:16:39.361596 | orchestrator | 2026-04-05 07:16:39.361601 | orchestrator | 2026-04-05 07:16:39.361606 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:16:39.361614 | orchestrator | 
Sunday 05 April 2026 07:16:39 +0000 (0:00:14.799) 0:06:04.471 ********** 2026-04-05 07:16:39.783927 | orchestrator | =============================================================================== 2026-04-05 07:16:39.784050 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 36.35s 2026-04-05 07:16:39.784072 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 35.12s 2026-04-05 07:16:39.784090 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.54s 2026-04-05 07:16:39.784131 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.53s 2026-04-05 07:16:39.784150 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 24.16s 2026-04-05 07:16:39.784168 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 19.06s 2026-04-05 07:16:39.784186 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 14.80s 2026-04-05 07:16:39.784203 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.93s 2026-04-05 07:16:39.784223 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.68s 2026-04-05 07:16:39.784241 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.20s 2026-04-05 07:16:39.784258 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.43s 2026-04-05 07:16:39.784275 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.20s 2026-04-05 07:16:39.784293 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.58s 2026-04-05 07:16:39.784310 | orchestrator | service-check-containers : cinder | Check containers -------------------- 5.25s 2026-04-05 07:16:39.784328 | orchestrator | cinder : Copying over 
config.json files for services -------------------- 5.24s 2026-04-05 07:16:39.784346 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.11s 2026-04-05 07:16:39.784364 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.80s 2026-04-05 07:16:39.784381 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.69s 2026-04-05 07:16:39.784399 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.54s 2026-04-05 07:16:39.784417 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.42s 2026-04-05 07:16:39.961382 | orchestrator | + osism apply -a upgrade barbican 2026-04-05 07:16:41.276258 | orchestrator | 2026-04-05 07:16:41 | INFO  | Prepare task for execution of barbican. 2026-04-05 07:16:41.346319 | orchestrator | 2026-04-05 07:16:41 | INFO  | Task 8853971c-a797-47c6-8781-d768f3c4d0f3 (barbican) was prepared for execution. 2026-04-05 07:16:41.346411 | orchestrator | 2026-04-05 07:16:41 | INFO  | It takes a moment until task 8853971c-a797-47c6-8781-d768f3c4d0f3 (barbican) has been started and output is visible here. 
2026-04-05 07:16:55.105097 | orchestrator | 2026-04-05 07:16:55.105233 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:16:55.105252 | orchestrator | 2026-04-05 07:16:55.105265 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:16:55.105277 | orchestrator | Sunday 05 April 2026 07:16:46 +0000 (0:00:01.766) 0:00:01.766 ********** 2026-04-05 07:16:55.105314 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:16:55.105327 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:16:55.105337 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:16:55.105348 | orchestrator | 2026-04-05 07:16:55.105359 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:16:55.105370 | orchestrator | Sunday 05 April 2026 07:16:48 +0000 (0:00:01.748) 0:00:03.515 ********** 2026-04-05 07:16:55.105382 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-05 07:16:55.105393 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-05 07:16:55.105404 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-05 07:16:55.105415 | orchestrator | 2026-04-05 07:16:55.105426 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-05 07:16:55.105436 | orchestrator | 2026-04-05 07:16:55.105447 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 07:16:55.105458 | orchestrator | Sunday 05 April 2026 07:16:50 +0000 (0:00:02.207) 0:00:05.723 ********** 2026-04-05 07:16:55.105470 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:16:55.105481 | orchestrator | 2026-04-05 07:16:55.105530 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 
2026-04-05 07:16:55.105542 | orchestrator | Sunday 05 April 2026 07:16:52 +0000 (0:00:02.302) 0:00:08.025 ********** 2026-04-05 07:16:55.105558 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:55.105594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:55.105619 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:55.105677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:16:55.105700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:55.105720 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:55.105749 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:55.105769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:16:55.105812 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572126 | orchestrator | 2026-04-05 07:17:05.572251 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-05 07:17:05.572271 | orchestrator | Sunday 05 April 2026 07:16:56 +0000 (0:00:03.522) 0:00:11.549 ********** 2026-04-05 07:17:05.572284 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-05 07:17:05.572295 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-05 07:17:05.572306 | 
orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-05 07:17:05.572317 | orchestrator | 2026-04-05 07:17:05.572328 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-05 07:17:05.572339 | orchestrator | Sunday 05 April 2026 07:16:58 +0000 (0:00:01.953) 0:00:13.502 ********** 2026-04-05 07:17:05.572350 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:05.572362 | orchestrator | 2026-04-05 07:17:05.572372 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-05 07:17:05.572383 | orchestrator | Sunday 05 April 2026 07:16:59 +0000 (0:00:01.193) 0:00:14.696 ********** 2026-04-05 07:17:05.572393 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:05.572404 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:05.572415 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:17:05.572425 | orchestrator | 2026-04-05 07:17:05.572436 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-05 07:17:05.572447 | orchestrator | Sunday 05 April 2026 07:17:00 +0000 (0:00:01.566) 0:00:16.262 ********** 2026-04-05 07:17:05.572458 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:17:05.572469 | orchestrator | 2026-04-05 07:17:05.572479 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-05 07:17:05.572533 | orchestrator | Sunday 05 April 2026 07:17:02 +0000 (0:00:01.675) 0:00:17.938 ********** 2026-04-05 07:17:05.572568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:05.572586 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:05.572645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:05.572661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572696 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572710 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572733 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:05.572756 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:08.883046 | orchestrator | 2026-04-05 07:17:08.883170 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-05 07:17:08.883199 | orchestrator | Sunday 05 April 2026 07:17:06 +0000 (0:00:04.084) 0:00:22.022 ********** 2026-04-05 07:17:08.883226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:08.883252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883348 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:08.883370 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:08.883414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883457 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:08.883476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:08.883535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:08.883591 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:17:08.883610 | orchestrator | 2026-04-05 07:17:08.883629 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-05 07:17:08.883649 | orchestrator | Sunday 05 April 2026 07:17:08 +0000 (0:00:01.913) 0:00:23.935 ********** 2026-04-05 07:17:08.883675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:11.751241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751397 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:11.751469 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:11.751487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751577 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:11.751610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:11.751623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:11.751662 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:17:11.751673 | orchestrator | 2026-04-05 07:17:11.751686 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-05 07:17:11.751698 | orchestrator | Sunday 05 April 2026 07:17:10 +0000 (0:00:01.652) 0:00:25.588 ********** 2026-04-05 07:17:11.751710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:11.751755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:24.047067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:24.047263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:24.047433 | orchestrator | 2026-04-05 07:17:24.047453 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-05 07:17:24.047472 | orchestrator | Sunday 05 April 2026 07:17:14 +0000 (0:00:04.459) 0:00:30.047 ********** 2026-04-05 07:17:24.047489 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:17:24.047541 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:17:24.047559 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:17:24.047576 | orchestrator | 2026-04-05 07:17:24.047594 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-05 07:17:24.047611 | orchestrator | Sunday 05 April 2026 07:17:17 +0000 (0:00:02.662) 0:00:32.709 ********** 2026-04-05 07:17:24.047629 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 07:17:24.047650 | orchestrator | 2026-04-05 07:17:24.047678 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-05 07:17:24.047698 | orchestrator | Sunday 05 April 2026 07:17:19 +0000 (0:00:02.377) 0:00:35.087 ********** 2026-04-05 07:17:24.047715 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 07:17:24.047733 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:24.047752 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:17:24.047772 | orchestrator | 2026-04-05 07:17:24.047792 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-05 07:17:24.047811 | orchestrator | Sunday 05 April 2026 07:17:21 +0000 (0:00:01.704) 0:00:36.791 ********** 2026-04-05 07:17:24.047833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:24.047850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:24.047881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:29.931345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931602 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:29.931692 | orchestrator | 2026-04-05 07:17:29.931713 | orchestrator | TASK 
[barbican : Copying over existing policy file] **************************** 2026-04-05 07:17:29.931760 | orchestrator | Sunday 05 April 2026 07:17:29 +0000 (0:00:07.751) 0:00:44.543 ********** 2026-04-05 07:17:29.931794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:29.931819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-04-05 07:17:29.931840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:29.931862 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:29.931880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:29.931913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:33.668744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:33.668844 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:33.668862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:33.668875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:33.668887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:33.668920 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:17:33.668931 | orchestrator | 2026-04-05 07:17:33.668942 | orchestrator | TASK [service-check-containers : barbican | Check 
containers] ****************** 2026-04-05 07:17:33.668954 | orchestrator | Sunday 05 April 2026 07:17:31 +0000 (0:00:02.302) 0:00:46.845 ********** 2026-04-05 07:17:33.668979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:33.668998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:33.669010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:17:33.669028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:33.669039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:33.669057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:37.652994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:37.653096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:37.653112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:17:37.653147 | orchestrator | 2026-04-05 07:17:37.653161 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-05 07:17:37.653174 | orchestrator | Sunday 05 April 2026 07:17:35 +0000 (0:00:04.071) 0:00:50.917 ********** 2026-04-05 07:17:37.653187 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:17:37.653199 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-05 07:17:37.653211 | orchestrator | } 2026-04-05 07:17:37.653222 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:17:37.653232 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:17:37.653243 | orchestrator | } 2026-04-05 07:17:37.653254 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:17:37.653265 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:17:37.653276 | orchestrator | } 2026-04-05 07:17:37.653287 | orchestrator | 2026-04-05 07:17:37.653298 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:17:37.653309 | orchestrator | Sunday 05 April 2026 07:17:36 +0000 (0:00:01.354) 0:00:52.271 ********** 2026-04-05 07:17:37.653324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:37.653355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:37.653375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:37.653388 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:17:37.653400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:17:37.653419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:17:37.653431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:17:37.653442 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:17:37.653462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:20:35.314730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:20:35.314855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:20:35.314897 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:20:35.314914 | orchestrator | 2026-04-05 07:20:35.314927 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-05 07:20:35.314941 | orchestrator | Sunday 05 April 2026 07:17:39 +0000 (0:00:02.443) 0:00:54.715 ********** 2026-04-05 07:20:35.314949 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:20:35.314955 | orchestrator | 2026-04-05 07:20:35.314962 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 07:20:35.314969 | orchestrator | Sunday 05 April 2026 07:17:52 +0000 (0:00:13.633) 0:01:08.348 ********** 2026-04-05 07:20:35.314976 | orchestrator | 2026-04-05 07:20:35.314983 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 07:20:35.314989 | orchestrator | Sunday 05 April 2026 07:17:53 +0000 (0:00:00.452) 0:01:08.801 ********** 2026-04-05 07:20:35.314996 | orchestrator | 2026-04-05 07:20:35.315003 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-05 07:20:35.315009 | orchestrator | Sunday 05 April 2026 07:17:53 +0000 (0:00:00.450) 0:01:09.251 ********** 2026-04-05 07:20:35.315019 | orchestrator | 2026-04-05 07:20:35.315029 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-05 07:20:35.315040 | orchestrator | Sunday 05 April 2026 07:17:54 +0000 (0:00:00.772) 0:01:10.024 ********** 2026-04-05 07:20:35.315051 | orchestrator | changed: 
[testbed-node-0] 2026-04-05 07:20:35.315062 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:20:35.315073 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:20:35.315084 | orchestrator | 2026-04-05 07:20:35.315094 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-05 07:20:35.315106 | orchestrator | Sunday 05 April 2026 07:20:09 +0000 (0:02:14.434) 0:03:24.458 ********** 2026-04-05 07:20:35.315116 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:20:35.315127 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:20:35.315138 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:20:35.315150 | orchestrator | 2026-04-05 07:20:35.315160 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-05 07:20:35.315172 | orchestrator | Sunday 05 April 2026 07:20:21 +0000 (0:00:12.761) 0:03:37.220 ********** 2026-04-05 07:20:35.315183 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:20:35.315195 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:20:35.315206 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:20:35.315217 | orchestrator | 2026-04-05 07:20:35.315227 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:20:35.315234 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-05 07:20:35.315284 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:20:35.315292 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:20:35.315300 | orchestrator | 2026-04-05 07:20:35.315308 | orchestrator | 2026-04-05 07:20:35.315316 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:20:35.315324 | orchestrator | Sunday 05 April 2026 
07:20:34 +0000 (0:00:13.079) 0:03:50.299 ********** 2026-04-05 07:20:35.315331 | orchestrator | =============================================================================== 2026-04-05 07:20:35.315348 | orchestrator | barbican : Restart barbican-api container ----------------------------- 134.43s 2026-04-05 07:20:35.315355 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.63s 2026-04-05 07:20:35.315363 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.08s 2026-04-05 07:20:35.315371 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.76s 2026-04-05 07:20:35.315379 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.75s 2026-04-05 07:20:35.315411 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.46s 2026-04-05 07:20:35.315424 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.08s 2026-04-05 07:20:35.315436 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.07s 2026-04-05 07:20:35.315447 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.52s 2026-04-05 07:20:35.315458 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.66s 2026-04-05 07:20:35.315470 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.44s 2026-04-05 07:20:35.315482 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.38s 2026-04-05 07:20:35.315495 | orchestrator | barbican : include_tasks ------------------------------------------------ 2.30s 2026-04-05 07:20:35.315506 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.30s 2026-04-05 07:20:35.315517 | orchestrator | Group hosts based on enabled services 
----------------------------------- 2.21s 2026-04-05 07:20:35.315530 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.95s 2026-04-05 07:20:35.315541 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.91s 2026-04-05 07:20:35.315554 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s 2026-04-05 07:20:35.315563 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.70s 2026-04-05 07:20:35.315572 | orchestrator | barbican : Flush handlers ----------------------------------------------- 1.68s 2026-04-05 07:20:35.519469 | orchestrator | + osism apply -a upgrade designate 2026-04-05 07:20:36.784517 | orchestrator | 2026-04-05 07:20:36 | INFO  | Prepare task for execution of designate. 2026-04-05 07:20:36.850738 | orchestrator | 2026-04-05 07:20:36 | INFO  | Task 8d8ec6f1-95c2-44bd-a234-b35b4a304ff3 (designate) was prepared for execution. 2026-04-05 07:20:36.850804 | orchestrator | 2026-04-05 07:20:36 | INFO  | It takes a moment until task 8d8ec6f1-95c2-44bd-a234-b35b4a304ff3 (designate) has been started and output is visible here. 
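The per-service healthcheck dicts echoed in the task items above follow Docker's healthcheck model. As an illustrative sketch (not kolla-ansible's actual implementation), such a dict can be mapped onto the equivalent `docker run` health flags; the field names are copied verbatim from the log, and the bare numeric `interval`/`timeout`/`start_period` values are assumed to mean seconds:

```python
# Illustrative sketch only (not kolla-ansible's actual code): map a
# kolla-style healthcheck dict, as echoed in the task items above,
# onto the equivalent `docker run` health flags.
def healthcheck_to_docker_flags(hc):
    test = hc["test"]
    # ['CMD-SHELL', '<command>'] corresponds to Docker's shell-form test
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example dict copied from the barbican-worker item in the log above
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(hc)[0])
# --health-cmd=healthcheck_port barbican-worker 5672
```

The shell-form `CMD-SHELL` test is what lets kolla images run their bundled `healthcheck_port` / `healthcheck_curl` / `healthcheck_listen` helper scripts seen throughout this log.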
2026-04-05 07:20:51.423164 | orchestrator | 2026-04-05 07:20:51.423329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:20:51.423349 | orchestrator | 2026-04-05 07:20:51.423361 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:20:51.423372 | orchestrator | Sunday 05 April 2026 07:20:41 +0000 (0:00:01.603) 0:00:01.603 ********** 2026-04-05 07:20:51.423383 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:20:51.423394 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:20:51.423405 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:20:51.423415 | orchestrator | 2026-04-05 07:20:51.423426 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:20:51.423438 | orchestrator | Sunday 05 April 2026 07:20:43 +0000 (0:00:01.727) 0:00:03.330 ********** 2026-04-05 07:20:51.423449 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-05 07:20:51.423461 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-05 07:20:51.423471 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-05 07:20:51.423482 | orchestrator | 2026-04-05 07:20:51.423492 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-05 07:20:51.423503 | orchestrator | 2026-04-05 07:20:51.423514 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 07:20:51.423552 | orchestrator | Sunday 05 April 2026 07:20:45 +0000 (0:00:01.662) 0:00:04.993 ********** 2026-04-05 07:20:51.423564 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:20:51.423575 | orchestrator | 2026-04-05 07:20:51.423585 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 
2026-04-05 07:20:51.423596 | orchestrator | Sunday 05 April 2026 07:20:48 +0000 (0:00:03.298) 0:00:08.292 ********** 2026-04-05 07:20:51.423610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:20:51.423645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:20:51.423659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:20:51.423714 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423761 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:51.423795 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844642 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844799 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 
07:20:59.844840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844952 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844967 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:20:59.844979 | orchestrator | 2026-04-05 07:20:59.844993 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-05 07:20:59.845005 | orchestrator | Sunday 05 April 2026 07:20:53 +0000 (0:00:05.318) 0:00:13.610 ********** 2026-04-05 07:20:59.845016 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:20:59.845029 | orchestrator | 2026-04-05 07:20:59.845040 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-05 07:20:59.845050 | orchestrator | Sunday 05 April 2026 07:20:54 +0000 (0:00:01.192) 0:00:14.803 ********** 2026-04-05 07:20:59.845061 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 07:20:59.845072 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:20:59.845082 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:20:59.845093 | orchestrator | 2026-04-05 07:20:59.845104 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-05 07:20:59.845114 | orchestrator | Sunday 05 April 2026 07:20:56 +0000 (0:00:01.497) 0:00:16.300 ********** 2026-04-05 07:20:59.845126 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:20:59.845137 | orchestrator | 2026-04-05 07:20:59.845147 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-05 07:20:59.845158 | orchestrator | Sunday 05 April 2026 07:20:58 +0000 (0:00:01.935) 0:00:18.235 ********** 2026-04-05 07:20:59.845176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:20:59.845223 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:20:59.845272 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:03.935882 | orchestrator | ok: [testbed-node-0] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936009 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936026 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936088 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936131 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936148 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936160 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936178 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936247 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:03.936266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:06.363546 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:06.363684 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:06.363714 | orchestrator | 2026-04-05 07:21:06.363738 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-05 07:21:06.363753 | orchestrator | Sunday 05 April 2026 07:21:05 +0000 (0:00:06.970) 0:00:25.205 ********** 2026-04-05 07:21:06.363767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:06.363805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:06.363819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:06.363849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:06.363868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:06.363880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:06.363900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': 
['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:06.363913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:06.363933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:08.747850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747910 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:21:08.747925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.747993 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:21:08.748011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.748031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:08.748042 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:21:08.748054 | orchestrator | 2026-04-05 
07:21:08.748066 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-05 07:21:08.748078 | orchestrator | Sunday 05 April 2026 07:21:07 +0000 (0:00:02.545) 0:00:27.751 ********** 2026-04-05 07:21:08.748091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:08.748106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 
07:21:08.748127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:09.255308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:09.256348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:09.256363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:09.256425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256449 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:21:09.256462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:09.256548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:13.753096 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:21:13.753219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:13.753240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:13.753253 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:21:13.753264 | orchestrator | 2026-04-05 07:21:13.753276 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-05 07:21:13.753288 | orchestrator | 
Sunday 05 April 2026 07:21:10 +0000 (0:00:02.595) 0:00:30.346 ********** 2026-04-05 07:21:13.753301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:13.753316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:13.753387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:13.753402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:13.753415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:13.753426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:13.753438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2026-04-05 07:21:13.753457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:13.753481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 
07:21:20.728712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728739 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 
07:21:20.728808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:20.728824 | orchestrator | 2026-04-05 07:21:20.728833 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-05 07:21:20.728842 | orchestrator | Sunday 05 April 2026 07:21:17 +0000 (0:00:07.122) 0:00:37.469 ********** 2026-04-05 07:21:20.728850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:20.728868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:20.728887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:31.958099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:31.958458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:43.411551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:43.411662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:43.411699 | orchestrator | 2026-04-05 07:21:43.411712 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-05 07:21:43.411724 | orchestrator | Sunday 05 April 2026 07:21:33 +0000 (0:00:16.006) 0:00:53.475 ********** 2026-04-05 07:21:43.411734 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 07:21:43.411745 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 07:21:43.411754 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-05 07:21:43.411764 | orchestrator | 2026-04-05 07:21:43.411774 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-05 07:21:43.411783 | orchestrator | Sunday 05 April 2026 07:21:38 +0000 (0:00:04.712) 0:00:58.187 ********** 2026-04-05 07:21:43.411793 
| orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 07:21:43.411803 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 07:21:43.411812 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-05 07:21:43.411822 | orchestrator | 2026-04-05 07:21:43.411832 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-05 07:21:43.411842 | orchestrator | Sunday 05 April 2026 07:21:41 +0000 (0:00:03.434) 0:01:01.622 ********** 2026-04-05 07:21:43.411867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:43.411898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:43.411910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:43.411929 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:43.411941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:43.411957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:43.411967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:43.411984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:46.443793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443930 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:46.443939 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:46.443982 
| orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:46.443990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:46.443998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:46.444005 | orchestrator | 
2026-04-05 07:21:46.444013 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-05 07:21:46.444022 | orchestrator | Sunday 05 April 2026 07:21:45 +0000 (0:00:03.906) 0:01:05.528 **********
2026-04-05 07:21:46.444033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:21:46.444049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False,
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:47.403748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:47.403840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:47.403867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:47.403935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403964 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:47.403973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.403987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:47.404002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:51.372099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 07:21:51.372246 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 07:21:51.372276 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 07:21:51.372284 | orchestrator |
2026-04-05 07:21:51.372292 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-05 07:21:51.372300 | orchestrator | Sunday 05 April 2026 07:21:49 +0000 (0:00:03.690) 0:01:09.218 **********
2026-04-05 07:21:51.372307 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:21:51.372315 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:21:51.372322 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:21:51.372328 | orchestrator |
2026-04-05 07:21:51.372354 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-05 07:21:51.372361 | orchestrator | Sunday 05 April 2026 07:21:50 +0000 (0:00:01.365) 0:01:10.584 **********
2026-04-05 07:21:51.372370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:21:51.372380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:51.372405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:51.372414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:51.372425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:51.372433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:51.372448 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:21:51.372455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:51.372467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:54.585277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585390 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:21:54.585396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:21:54.585413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:21:54.585418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:21:54.585441 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:21:54.585445 | orchestrator | 2026-04-05 07:21:54.585449 | orchestrator | TASK [service-check-containers : designate | Check 
containers] ***************** 2026-04-05 07:21:54.585454 | orchestrator | Sunday 05 April 2026 07:21:52 +0000 (0:00:02.175) 0:01:12.760 ********** 2026-04-05 07:21:54.585458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:54.585466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:57.959394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:21:57.959529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959545 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:21:57.959693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-05 07:22:01.844716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:22:01.844844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:22:01.844873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:22:01.844894 | orchestrator | 2026-04-05 07:22:01.844911 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-05 07:22:01.844923 | orchestrator | Sunday 05 April 2026 07:21:59 +0000 (0:00:06.997) 0:01:19.757 ********** 2026-04-05 07:22:01.844936 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:22:01.844948 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:22:01.844959 | orchestrator | } 2026-04-05 07:22:01.844972 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:22:01.844990 
| orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:22:01.845009 | orchestrator | } 2026-04-05 07:22:01.845027 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:22:01.845087 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:22:01.845109 | orchestrator | } 2026-04-05 07:22:01.845128 | orchestrator | 2026-04-05 07:22:01.845146 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:22:01.845163 | orchestrator | Sunday 05 April 2026 07:22:01 +0000 (0:00:01.391) 0:01:21.149 ********** 2026-04-05 07:22:01.845182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:22:01.845267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:22:01.845305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:22:01.845327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:22:01.845348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:22:01.845370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:22:01.845386 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:22:01.845400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:22:01.845432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:22:49.525453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525629 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:22:49.525644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:22:49.525682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-05 07:22:49.525697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-05 07:22:49.525731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})
2026-04-05 07:22:49.525750 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:22:49.525762 | orchestrator |
2026-04-05 07:22:49.525774 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-05 07:22:49.525786 | orchestrator | Sunday 05 April 2026 07:22:03 +0000 (0:00:02.083) 0:01:23.233 **********
2026-04-05 07:22:49.525797 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:22:49.525808 | orchestrator |
2026-04-05 07:22:49.525819 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-05 07:22:49.525830 | orchestrator | Sunday 05 April 2026 07:22:18 +0000 (0:00:15.238) 0:01:38.472 **********
2026-04-05 07:22:49.525840 | orchestrator |
2026-04-05 07:22:49.525851 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-05 07:22:49.525868 | orchestrator | Sunday 05 April 2026 07:22:19 +0000 (0:00:00.595) 0:01:39.067 **********
2026-04-05 07:22:49.525886 | orchestrator |
2026-04-05 07:22:49.525904 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-05 07:22:49.525922 | orchestrator | Sunday 05 April 2026 07:22:19 +0000 (0:00:00.434) 0:01:39.501 **********
2026-04-05 07:22:49.525940 | orchestrator |
2026-04-05 07:22:49.526097 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-05 07:22:49.526120 | orchestrator | Sunday 05 April 2026 07:22:20 +0000 (0:00:00.791) 0:01:40.293 **********
2026-04-05 07:22:49.526139 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:22:49.526159 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:22:49.526178 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:22:49.526197 | orchestrator |
2026-04-05 07:22:49.526216 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-05 07:22:49.526234 | orchestrator | Sunday 05 April 2026 07:22:35 +0000 (0:00:15.557) 0:01:55.850 **********
2026-04-05 07:22:49.526253 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:22:49.526272 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:22:49.526291 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:22:49.526308 | orchestrator |
2026-04-05 07:22:49.526325 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-05 07:22:49.526367 | orchestrator | Sunday 05 April 2026 07:22:49 +0000 (0:00:13.542) 0:02:09.392 **********
2026-04-05 07:24:43.089517 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:24:43.089618 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:24:43.089629 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:24:43.089635 | orchestrator |
2026-04-05 07:24:43.089642 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-05 07:24:43.089650 | orchestrator | Sunday 05 April 2026 07:23:02 +0000 (0:00:13.426) 0:02:22.819 **********
2026-04-05 07:24:43.089657 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:24:43.089664 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:24:43.089671 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:24:43.089679 | orchestrator |
2026-04-05 07:24:43.089685 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-05 07:24:43.089693 | orchestrator | Sunday 05 April 2026 07:24:06 +0000 (0:01:03.538) 0:03:26.358 **********
2026-04-05 07:24:43.089700 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:24:43.089707 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:24:43.089714 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:24:43.089721 | orchestrator |
2026-04-05 07:24:43.089758 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-05 07:24:43.089766 | orchestrator | Sunday 05 April 2026 07:24:19 +0000 (0:00:13.447) 0:03:39.806 **********
2026-04-05 07:24:43.089774 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:24:43.089801 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:24:43.089809 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:24:43.089815 | orchestrator |
2026-04-05 07:24:43.089822 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-05 07:24:43.089829 | orchestrator | Sunday 05 April 2026 07:24:34 +0000 (0:00:14.245) 0:03:54.052 **********
2026-04-05 07:24:43.089836 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:24:43.089843 | orchestrator |
2026-04-05 07:24:43.089849 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:24:43.089857 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-05 07:24:43.089866 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 07:24:43.089873 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 07:24:43.089879 | orchestrator |
2026-04-05 07:24:43.089886 | orchestrator |
2026-04-05 07:24:43.089892 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:24:43.089899 | orchestrator | Sunday 05 April 2026 07:24:42 +0000 (0:00:08.574) 0:04:02.627 **********
2026-04-05 07:24:43.089906 | orchestrator | ===============================================================================
2026-04-05 07:24:43.089913 | orchestrator | designate : Restart designate-producer container ----------------------- 63.54s
2026-04-05 07:24:43.089919 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.01s
2026-04-05 07:24:43.089926 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.56s
2026-04-05 07:24:43.089932 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.24s
2026-04-05 07:24:43.089938 | orchestrator | designate : Restart designate-worker container ------------------------- 14.24s
2026-04-05 07:24:43.089945 | orchestrator | designate : Restart designate-api container ---------------------------- 13.54s
2026-04-05 07:24:43.089951 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.45s
2026-04-05 07:24:43.089958 | orchestrator | designate : Restart designate-central container ------------------------ 13.43s
2026-04-05 07:24:43.089965 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.58s
2026-04-05 07:24:43.089971 | orchestrator | designate : Copying over config.json files for services ----------------- 7.12s
2026-04-05 07:24:43.089978 | orchestrator | service-check-containers : designate | Check containers ----------------- 7.00s
2026-04-05 07:24:43.089985 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.97s
2026-04-05 07:24:43.089991 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.32s
2026-04-05 07:24:43.089998 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.71s
2026-04-05 07:24:43.090005 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.91s
2026-04-05 07:24:43.090012 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.69s
2026-04-05 07:24:43.090065 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.44s
2026-04-05 07:24:43.090072 | orchestrator | designate : include_tasks ----------------------------------------------- 3.30s
2026-04-05 07:24:43.090080 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 2.60s
2026-04-05 07:24:43.090087 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 2.55s
2026-04-05 07:24:43.270402 | orchestrator | + osism apply -a upgrade ceilometer
2026-04-05 07:24:44.561909 | orchestrator | 2026-04-05 07:24:44 | INFO  | Prepare task for execution of ceilometer.
2026-04-05 07:24:44.632592 | orchestrator | 2026-04-05 07:24:44 | INFO  | Task 2d2e3be8-4e0f-4b46-9d78-7f557d1547da (ceilometer) was prepared for execution.
2026-04-05 07:24:44.632709 | orchestrator | 2026-04-05 07:24:44 | INFO  | It takes a moment until task 2d2e3be8-4e0f-4b46-9d78-7f557d1547da (ceilometer) has been started and output is visible here.
2026-04-05 07:25:04.393191 | orchestrator |
2026-04-05 07:25:04.393316 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:25:04.393334 | orchestrator |
2026-04-05 07:25:04.393346 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:25:04.393358 | orchestrator | Sunday 05 April 2026 07:24:49 +0000 (0:00:01.466) 0:00:01.466 **********
2026-04-05 07:25:04.393370 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:25:04.393382 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:25:04.393393 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:25:04.393403 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:25:04.393414 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:25:04.393424 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:25:04.393435 | orchestrator |
2026-04-05 07:25:04.393446 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:25:04.393457 | orchestrator | Sunday 05 April 2026 07:24:52 +0000 (0:00:02.687) 0:00:04.154 **********
2026-04-05 07:25:04.393469 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-04-05 07:25:04.393480
| orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-05 07:25:04.393490 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-05 07:25:04.393501 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-05 07:25:04.393511 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-05 07:25:04.393522 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-05 07:25:04.393533 | orchestrator | 2026-04-05 07:25:04.393543 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-05 07:25:04.393554 | orchestrator | 2026-04-05 07:25:04.393565 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-05 07:25:04.393575 | orchestrator | Sunday 05 April 2026 07:24:54 +0000 (0:00:02.164) 0:00:06.318 ********** 2026-04-05 07:25:04.393587 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 07:25:04.393599 | orchestrator | 2026-04-05 07:25:04.393610 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-05 07:25:04.393620 | orchestrator | Sunday 05 April 2026 07:24:57 +0000 (0:00:02.742) 0:00:09.061 ********** 2026-04-05 07:25:04.393634 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:04.393650 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:04.393663 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393785 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:04.393799 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393814 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393828 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393851 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:04.393865 | orchestrator | 2026-04-05 07:25:04.393878 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-05 07:25:04.393892 | orchestrator | Sunday 05 April 2026 07:25:01 +0000 (0:00:04.127) 0:00:13.188 
**********
2026-04-05 07:25:04.393905 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:25:04.393916 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 07:25:04.393927 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 07:25:04.393938 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 07:25:04.393949 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 07:25:04.393959 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 07:25:04.393970 | orchestrator |
2026-04-05 07:25:04.393981 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] ***
2026-04-05 07:25:04.393993 | orchestrator | Sunday 05 April 2026 07:25:04 +0000 (0:00:03.017) 0:00:16.206 **********
2026-04-05 07:25:04.394004 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:25:04.394082 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:25:11.843873 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:25:11.844008 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:25:11.844028 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:25:11.844039 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:25:11.844050 | orchestrator |
2026-04-05 07:25:11.844063 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] ***
2026-04-05 07:25:11.844075 | orchestrator | Sunday 05 April 2026 07:25:05 +0000 (0:00:01.857) 0:00:18.005 **********
2026-04-05 07:25:11.844086 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:11.844097 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:11.844108 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:11.844118 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:11.844129 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:11.844139 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:11.844150 | orchestrator |
2026-04-05 07:25:11.844161 | orchestrator |
TASK [ceilometer : Set the variable that control the copy of custom meter definitions] *** 2026-04-05 07:25:11.844173 | orchestrator | Sunday 05 April 2026 07:25:07 +0000 (0:00:01.857) 0:00:19.862 ********** 2026-04-05 07:25:11.844183 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:25:11.844194 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:25:11.844204 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:25:11.844214 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:25:11.844225 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:25:11.844235 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:25:11.844245 | orchestrator | 2026-04-05 07:25:11.844256 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-05 07:25:11.844267 | orchestrator | Sunday 05 April 2026 07:25:09 +0000 (0:00:01.756) 0:00:21.618 ********** 2026-04-05 07:25:11.844281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:11.844319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844332 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:25:11.844344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:11.844355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:11.844435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844450 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:25:11.844464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844486 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 07:25:11.844500 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:25:11.844514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844529 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:25:11.844549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:11.844567 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:25:11.844587 | orchestrator | 2026-04-05 07:25:11.844608 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-05 07:25:11.844627 | orchestrator | Sunday 05 April 2026 07:25:11 +0000 (0:00:02.055) 0:00:23.674 ********** 
2026-04-05 07:25:11.844644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:11.844673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695306 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:26.695446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:26.695504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:26.695543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695581 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:26.695600 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:26.695618 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:26.695686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695709 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:26.695729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:26.695761 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:26.695779 | orchestrator |
2026-04-05 07:25:26.695800 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-04-05 07:25:26.695820 | orchestrator | Sunday 05 April 2026 07:25:13 +0000 (0:00:02.099) 0:00:25.773 **********
2026-04-05 07:25:26.695840 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:25:26.695859 | orchestrator |
2026-04-05 07:25:26.695880 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-04-05 07:25:26.695900 | orchestrator | Sunday 05 April 2026 07:25:15 +0000 (0:00:01.869) 0:00:27.643 **********
2026-04-05 07:25:26.695920 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:25:26.695939 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:25:26.695958 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:25:26.695979 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:25:26.695998 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:25:26.696017 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:25:26.696036 | orchestrator |
2026-04-05 07:25:26.696055 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-04-05 07:25:26.696075 | orchestrator | Sunday 05 April 2026 07:25:17 +0000 (0:00:01.770) 0:00:29.413 **********
2026-04-05 07:25:26.696095 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:25:26.696114 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:25:26.696133 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:25:26.696152 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:25:26.696171 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:25:26.696189 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:25:26.696207 | orchestrator |
2026-04-05 07:25:26.696226 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-04-05 07:25:26.696243 | orchestrator | Sunday 05 April 2026 07:25:19 +0000 (0:00:02.284) 0:00:31.698 **********
2026-04-05 07:25:26.696261 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:26.696279 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:26.696297 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:26.696315 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:26.696333 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:26.696351 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:26.696368 | orchestrator |
2026-04-05 07:25:26.696386 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-04-05 07:25:26.696404 | orchestrator | Sunday 05 April 2026 07:25:21 +0000 (0:00:01.909) 0:00:33.608 **********
2026-04-05 07:25:26.696421 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:26.696439 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:26.696457 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:26.696474 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:26.696492 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:26.696509 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:26.696527 | orchestrator |
2026-04-05 07:25:26.696545 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-04-05 07:25:26.696562 | orchestrator | Sunday 05 April 2026 07:25:23 +0000 (0:00:02.016) 0:00:35.625 **********
2026-04-05 07:25:26.696580 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:25:26.696597 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 07:25:26.696624 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 07:25:26.696641 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 07:25:26.696683 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 07:25:26.696703 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 07:25:26.696722 | orchestrator |
2026-04-05 07:25:26.696740 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-04-05 07:25:26.696758 | orchestrator | Sunday 05 April 2026 07:25:26 +0000 (0:00:02.885) 0:00:38.510 **********
2026-04-05 07:25:26.696777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:26.696812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:33.684877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684887 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:33.684896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:33.684926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684934 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:33.684941 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:33.684948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684957 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:33.684977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684984 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:33.684992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.684999 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:33.685006 | orchestrator |
2026-04-05 07:25:33.685014 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-04-05 07:25:33.685022 | orchestrator | Sunday 05 April 2026 07:25:28 +0000 (0:00:02.064) 0:00:40.575 **********
2026-04-05 07:25:33.685029 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:33.685036 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:33.685043 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:33.685050 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:33.685056 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:33.685063 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:33.685070 | orchestrator |
2026-04-05 07:25:33.685077 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-04-05 07:25:33.685084 | orchestrator | Sunday 05 April 2026 07:25:30 +0000 (0:00:01.930) 0:00:42.506 **********
2026-04-05 07:25:33.685091 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:25:33.685103 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 07:25:33.685110 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 07:25:33.685117 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 07:25:33.685123 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 07:25:33.685130 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 07:25:33.685137 | orchestrator |
2026-04-05 07:25:33.685144 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-04-05 07:25:33.685151 | orchestrator | Sunday 05 April 2026 07:25:33 +0000 (0:00:02.880) 0:00:45.386 **********
2026-04-05 07:25:33.685158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:33.685166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:33.685173 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:33.685180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:33.685192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.304937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:44.305087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.305105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.305118 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:44.305131 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:44.305142 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:44.305154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.305166 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:44.305185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.305204 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:44.305222 | orchestrator |
2026-04-05 07:25:44.305241 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-04-05 07:25:44.305259 | orchestrator | Sunday 05 April 2026 07:25:35 +0000 (0:00:02.225) 0:00:47.612 **********
2026-04-05 07:25:44.305276 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:44.305295 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:44.305313 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:44.305329 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:44.305348 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:44.305366 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:44.305384 | orchestrator |
2026-04-05 07:25:44.305403 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-04-05 07:25:44.305445 | orchestrator | Sunday 05 April 2026 07:25:37 +0000 (0:00:01.715) 0:00:49.327 **********
2026-04-05 07:25:44.305481 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:44.305501 | orchestrator |
2026-04-05 07:25:44.305520 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-04-05 07:25:44.305573 | orchestrator | Sunday 05 April 2026 07:25:38 +0000 (0:00:01.120) 0:00:50.448 **********
2026-04-05 07:25:44.305592 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:44.305611 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:44.305660 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:25:44.305681 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:25:44.305694 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:25:44.305707 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:25:44.305720 | orchestrator |
2026-04-05 07:25:44.305733 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-05 07:25:44.305745 | orchestrator | Sunday 05 April 2026 07:25:40 +0000 (0:00:02.029) 0:00:52.478 **********
2026-04-05 07:25:44.305759 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 07:25:44.305773 | orchestrator |
2026-04-05 07:25:44.305787 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-04-05 07:25:44.305798 | orchestrator | Sunday 05 April 2026 07:25:42 +0000 (0:00:02.405) 0:00:54.884 **********
2026-04-05 07:25:44.305810 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:44.305823 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:44.305844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:44.305863 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:44.305912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043376 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043477 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043493 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043516 | orchestrator |
2026-04-05 07:25:47.043529 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-04-05 07:25:47.043542 | orchestrator | Sunday 05 April 2026 07:25:46 +0000 (0:00:03.245) 0:00:58.130 **********
2026-04-05 07:25:47.043554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:47.043595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:47.043672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:25:47.043699 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:25:47.043711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043723 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:25:47.043734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:25:47.043745 | orchestrator | skipping:
[testbed-node-2] 2026-04-05 07:25:47.043756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:47.043774 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:25:47.043793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310252 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:25:52.310359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310378 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:25:52.310390 | orchestrator | 2026-04-05 07:25:52.310402 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-05 07:25:52.310414 | orchestrator | Sunday 05 April 2026 07:25:48 +0000 (0:00:02.250) 0:01:00.380 ********** 2026-04-05 07:25:52.310426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:52.310439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:52.310452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-05 07:25:52.310534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310545 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:25:52.310557 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:25:52.310567 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:25:52.310579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310590 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:25:52.310602 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310668 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:25:52.310681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-05 07:25:52.310693 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:25:52.310704 | orchestrator | 2026-04-05 07:25:52.310716 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-05 07:25:52.310727 | orchestrator | Sunday 05 April 2026 07:25:50 +0000 (0:00:02.612) 0:01:02.993 ********** 2026-04-05 07:25:52.310739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 
'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:52.310759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:57.691330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:57.691438 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691574 | orchestrator | 2026-04-05 07:25:57.691586 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-05 07:25:57.691599 | orchestrator | Sunday 05 April 2026 07:25:54 +0000 (0:00:03.490) 0:01:06.484 ********** 2026-04-05 07:25:57.691671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:57.691695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:25:57.691707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 
'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:25:57.691770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:26:15.465713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:26:15.465821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:26:15.465865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:26:15.465879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-05 07:26:15.465891 | orchestrator | 2026-04-05 07:26:15.465906 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-05 07:26:15.465918 | orchestrator | Sunday 05 April 2026 07:26:00 +0000 (0:00:06.482) 0:01:12.966 ********** 2026-04-05 07:26:15.465929 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 07:26:15.465941 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 07:26:15.465951 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 07:26:15.465962 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 07:26:15.465972 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 07:26:15.465999 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 07:26:15.466076 | orchestrator | 2026-04-05 07:26:15.466090 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-05 07:26:15.466101 | orchestrator | Sunday 05 April 2026 07:26:03 +0000 (0:00:02.903) 0:01:15.870 ********** 2026-04-05 07:26:15.466112 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:26:15.466123 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:26:15.466133 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 07:26:15.466144 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:26:15.466155 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:26:15.466165 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:26:15.466176 | orchestrator | 2026-04-05 07:26:15.466189 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-05 07:26:15.466203 | orchestrator | Sunday 05 April 2026 07:26:05 +0000 (0:00:01.765) 0:01:17.636 ********** 2026-04-05 07:26:15.466215 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:26:15.466228 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:26:15.466240 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:26:15.466253 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:26:15.466266 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:26:15.466278 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:26:15.466291 | orchestrator | 2026-04-05 07:26:15.466304 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-05 07:26:15.466317 | orchestrator | Sunday 05 April 2026 07:26:08 +0000 (0:00:02.626) 0:01:20.263 ********** 2026-04-05 07:26:15.466338 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:26:15.466351 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:26:15.466363 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:26:15.466376 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:26:15.466406 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:26:15.466419 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:26:15.466432 | orchestrator | 2026-04-05 07:26:15.466444 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-05 07:26:15.466456 | orchestrator | Sunday 05 April 2026 07:26:10 +0000 (0:00:02.377) 0:01:22.640 ********** 2026-04-05 07:26:15.466469 | orchestrator | ok: [testbed-node-0 -> localhost] 
2026-04-05 07:26:15.466481 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-05 07:26:15.466494 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-05 07:26:15.466506 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-05 07:26:15.466519 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-05 07:26:15.466531 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-05 07:26:15.466543 | orchestrator | 2026-04-05 07:26:15.466554 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-05 07:26:15.466565 | orchestrator | Sunday 05 April 2026 07:26:13 +0000 (0:00:02.964) 0:01:25.605 ********** 2026-04-05 07:26:15.466576 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-05 07:26:15.466618 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:15.466639 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:15.466659 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:15.466689 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:15.466710 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833161 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833265 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833283 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833296 | orchestrator |
2026-04-05 07:26:17.833309 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-04-05 07:26:17.833321 | orchestrator | Sunday 05 April 2026 07:26:17 +0000 (0:00:03.465) 0:01:29.070 **********
2026-04-05 07:26:17.833333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:17.833373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833386 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:26:17.833414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:17.833427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:17.833450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833461 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:26:17.833472 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:26:17.833483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833501 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:26:17.833513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:17.833523 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:26:17.833541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749053 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:26:24.749145 | orchestrator |
2026-04-05 07:26:24.749156 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-04-05 07:26:24.749166 | orchestrator | Sunday 05 April 2026 07:26:19 +0000 (0:00:02.023) 0:01:31.094 **********
2026-04-05 07:26:24.749173 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:26:24.749181 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:26:24.749188 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:26:24.749195 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:26:24.749202 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:26:24.749209 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:26:24.749216 | orchestrator |
2026-04-05 07:26:24.749224 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-04-05 07:26:24.749231 | orchestrator | Sunday 05 April 2026 07:26:20 +0000 (0:00:01.899) 0:01:32.993 **********
2026-04-05 07:26:24.749241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:24.749252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749282 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:26:24.749290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:24.749298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:24.749327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749336 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:26:24.749343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749351 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:26:24.749358 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:26:24.749372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749379 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:26:24.749387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:24.749394 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:26:24.749401 | orchestrator |
2026-04-05 07:26:24.749408 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] ****************
2026-04-05 07:26:24.749415 | orchestrator | Sunday 05 April 2026 07:26:23 +0000 (0:00:02.588) 0:01:35.582 **********
2026-04-05 07:26:24.749423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:24.749436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:29.001952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:29.002151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002282 | orchestrator |
2026-04-05 07:26:29.002295 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] ***
2026-04-05 07:26:29.002315 | orchestrator | Sunday 05 April 2026 07:26:26 +0000 (0:00:03.242) 0:01:38.824 **********
2026-04-05 07:26:29.002327 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 07:26:29.002338 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002349 | orchestrator | }
2026-04-05 07:26:29.002361 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 07:26:29.002372 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002382 | orchestrator | }
2026-04-05 07:26:29.002393 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 07:26:29.002404 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002415 | orchestrator | }
2026-04-05 07:26:29.002426 | orchestrator | changed: [testbed-node-3] => {
2026-04-05 07:26:29.002436 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002447 | orchestrator | }
2026-04-05 07:26:29.002458 | orchestrator | changed: [testbed-node-4] => {
2026-04-05 07:26:29.002469 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002479 | orchestrator | }
2026-04-05 07:26:29.002490 | orchestrator | changed: [testbed-node-5] => {
2026-04-05 07:26:29.002501 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:26:29.002512 | orchestrator | }
2026-04-05 07:26:29.002523 | orchestrator |
2026-04-05 07:26:29.002560 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 07:26:29.002625 | orchestrator | Sunday 05 April 2026 07:26:28 +0000 (0:00:01.786) 0:01:40.611 **********
2026-04-05 07:26:29.002638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:29.002650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:26:29.002673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:26:29.002701 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:27:23.093555 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:27:23.093680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-05 07:27:23.093703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:27:23.093717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:27:23.093731 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:27:23.093743 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:27:23.093755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:27:23.093766 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:27:23.093778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-05 07:27:23.093815 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:27:23.093827 | orchestrator |
2026-04-05 07:27:23.093839 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-04-05 07:27:23.093851 | orchestrator | Sunday 05 April 2026 07:26:31 +0000 (0:00:02.959) 0:01:43.571 **********
2026-04-05 07:27:23.093862 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:27:23.093873 | orchestrator |
2026-04-05 07:27:23.093884 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.093895 | orchestrator | Sunday 05 April 2026 07:26:40 +0000 (0:00:09.482) 0:01:53.053 **********
2026-04-05 07:27:23.093906 | orchestrator |
2026-04-05 07:27:23.093917 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.093945 | orchestrator | Sunday 05 April 2026 07:26:41 +0000 (0:00:00.449) 0:01:53.502 **********
2026-04-05 07:27:23.093958 | orchestrator |
2026-04-05 07:27:23.093968 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.093979 | orchestrator | Sunday 05 April 2026 07:26:41 +0000 (0:00:00.459) 0:01:53.962 **********
2026-04-05 07:27:23.093990 | orchestrator |
2026-04-05 07:27:23.094001 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.094015 | orchestrator | Sunday 05 April 2026 07:26:42 +0000 (0:00:00.435) 0:01:54.397 **********
2026-04-05 07:27:23.094108 | orchestrator |
2026-04-05 07:27:23.094129 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.094149 | orchestrator | Sunday 05 April 2026 07:26:42 +0000 (0:00:00.434) 0:01:54.831 **********
2026-04-05 07:27:23.094169 | orchestrator |
2026-04-05 07:27:23.094190 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-05 07:27:23.094211 | orchestrator | Sunday 05 April 2026 07:26:43 +0000 (0:00:00.452) 0:01:55.284 **********
2026-04-05 07:27:23.094230 | orchestrator |
2026-04-05 07:27:23.094250 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-04-05 07:27:23.094270 | orchestrator | Sunday 05 April 2026 07:26:44 +0000 (0:00:00.816) 0:01:56.101 **********
2026-04-05 07:27:23.094289 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:27:23.094312 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:27:23.094331 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:27:23.094351 | orchestrator |
2026-04-05 07:27:23.094371 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-04-05 07:27:23.094389 | orchestrator | Sunday 05 April 2026 07:26:57 +0000 (0:00:13.172) 0:02:09.273 **********
2026-04-05 07:27:23.094409 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:27:23.094427 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:27:23.094446 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:27:23.094464 | orchestrator |
2026-04-05 07:27:23.094484 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-04-05 07:27:23.094531 | orchestrator | Sunday 05 April 2026 07:27:09 +0000 (0:00:12.394) 0:02:21.668 **********
2026-04-05 07:27:23.094550 | orchestrator | changed: [testbed-node-4]
2026-04-05 07:27:23.094570 | orchestrator | changed: [testbed-node-3]
2026-04-05 07:27:23.094590 | orchestrator | changed: [testbed-node-5]
2026-04-05 07:27:23.094610 | orchestrator |
2026-04-05 07:27:23.094631 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:27:23.094652 | orchestrator | testbed-node-0 : ok=26  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-05 07:27:23.094673 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 07:27:23.094692 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 07:27:23.094711 | orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-05 07:27:23.094750 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-05 07:27:23.094770 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-05 07:27:23.094789 | orchestrator |
2026-04-05 07:27:23.094808 | orchestrator |
2026-04-05 07:27:23.094828 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:27:23.094849 | orchestrator | Sunday 05 April 2026 07:27:23 +0000 (0:00:13.464) 0:02:35.133 **********
2026-04-05 07:27:23.094869 | orchestrator | ===============================================================================
2026-04-05 07:27:23.094889 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.46s
2026-04-05 07:27:23.094910 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 13.17s
2026-04-05 07:27:23.094930 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 12.39s
2026-04-05 07:27:23.094950 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 9.48s
2026-04-05 07:27:23.094970 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 6.48s
2026-04-05 07:27:23.094990 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 4.13s
2026-04-05 07:27:23.095010 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 3.49s
2026-04-05 07:27:23.095031 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 3.47s
2026-04-05 07:27:23.095051 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 3.25s
2026-04-05 07:27:23.095071 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 3.24s
2026-04-05 07:27:23.095091 | orchestrator | ceilometer : Flush handlers --------------------------------------------- 3.05s
2026-04-05 07:27:23.095111 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 3.02s
2026-04-05 07:27:23.095132 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 2.96s
2026-04-05 07:27:23.095151 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.96s
2026-04-05 07:27:23.095171 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 2.90s
2026-04-05 07:27:23.095188 | orchestrator | ceilometer : 
Check if custom polling.yaml exists ------------------------ 2.88s 2026-04-05 07:27:23.095223 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 2.88s 2026-04-05 07:27:23.497028 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 2.74s 2026-04-05 07:27:23.497124 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.69s 2026-04-05 07:27:23.497139 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 2.63s 2026-04-05 07:27:23.683109 | orchestrator | + osism apply -a upgrade aodh 2026-04-05 07:27:24.996809 | orchestrator | 2026-04-05 07:27:24 | INFO  | Prepare task for execution of aodh. 2026-04-05 07:27:25.068400 | orchestrator | 2026-04-05 07:27:25 | INFO  | Task ab031150-08a0-4d64-a833-f6abe0578bce (aodh) was prepared for execution. 2026-04-05 07:27:25.068525 | orchestrator | 2026-04-05 07:27:25 | INFO  | It takes a moment until task ab031150-08a0-4d64-a833-f6abe0578bce (aodh) has been started and output is visible here. 
2026-04-05 07:27:39.231035 | orchestrator | 2026-04-05 07:27:39.231141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:27:39.231157 | orchestrator | 2026-04-05 07:27:39.231168 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:27:39.231178 | orchestrator | Sunday 05 April 2026 07:27:30 +0000 (0:00:01.841) 0:00:01.841 ********** 2026-04-05 07:27:39.231188 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:27:39.231199 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:27:39.231209 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:27:39.231241 | orchestrator | 2026-04-05 07:27:39.231252 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:27:39.231261 | orchestrator | Sunday 05 April 2026 07:27:32 +0000 (0:00:02.003) 0:00:03.845 ********** 2026-04-05 07:27:39.231271 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-05 07:27:39.231281 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-05 07:27:39.231291 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-05 07:27:39.231300 | orchestrator | 2026-04-05 07:27:39.231309 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-05 07:27:39.231319 | orchestrator | 2026-04-05 07:27:39.231329 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-05 07:27:39.231338 | orchestrator | Sunday 05 April 2026 07:27:33 +0000 (0:00:01.630) 0:00:05.475 ********** 2026-04-05 07:27:39.231348 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:27:39.231359 | orchestrator | 2026-04-05 07:27:39.231368 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-05 
07:27:39.231378 | orchestrator | Sunday 05 April 2026 07:27:36 +0000 (0:00:02.859) 0:00:08.334 ********** 2026-04-05 07:27:39.231391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:39.231408 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:39.231435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:39.231455 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:39.231586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:43.923733 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:43.923836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:43.923852 | orchestrator | 2026-04-05 07:27:43.923866 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-05 07:27:43.923880 | orchestrator | Sunday 05 April 2026 07:27:40 +0000 (0:00:03.890) 0:00:12.225 ********** 2026-04-05 07:27:43.923892 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:27:43.923904 | orchestrator | 2026-04-05 07:27:43.923915 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-05 07:27:43.923926 | orchestrator | Sunday 05 April 2026 07:27:41 +0000 (0:00:01.126) 0:00:13.352 ********** 2026-04-05 07:27:43.923937 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:27:43.923947 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:27:43.923958 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:27:43.923968 | orchestrator | 2026-04-05 07:27:43.923979 | orchestrator | TASK [aodh : Copying over existing policy 
file] ******************************** 2026-04-05 07:27:43.923990 | orchestrator | Sunday 05 April 2026 07:27:43 +0000 (0:00:01.370) 0:00:14.723 ********** 2026-04-05 07:27:43.924002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:43.924018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:43.924056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:43.924089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:43.924103 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:27:43.924115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:43.924128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:43.924140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:43.924159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:43.924172 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:27:43.924192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:49.851453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:49.851621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:49.851639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:49.851652 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:27:49.851666 | orchestrator | 2026-04-05 07:27:49.851679 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-05 07:27:49.851691 | orchestrator | Sunday 05 April 2026 07:27:45 +0000 (0:00:01.865) 0:00:16.589 ********** 2026-04-05 07:27:49.851703 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:27:49.851741 | orchestrator | 2026-04-05 07:27:49.851752 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-05 07:27:49.851763 | orchestrator | Sunday 05 April 2026 07:27:46 +0000 (0:00:01.881) 0:00:18.470 ********** 2026-04-05 
07:27:49.851775 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:49.851808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-05 07:27:49.851822 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:49.851834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:49.851846 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:49.851866 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:49.851878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:49.851896 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:52.947543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:52.947655 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:52.947682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:52.947734 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:52.947752 | orchestrator | 2026-04-05 07:27:52.947773 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-05 07:27:52.947792 | orchestrator | Sunday 05 April 2026 07:27:52 +0000 (0:00:05.134) 0:00:23.605 ********** 2026-04-05 07:27:52.947816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:52.947865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:52.947885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:52.947898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:52.947919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:52.947931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:52.947942 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:27:52.947955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:52.947977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:54.999825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:54.999958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:55.000008 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:27:55.000024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:55.000036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:55.000046 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:27:55.000056 | orchestrator | 2026-04-05 07:27:55.000067 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-05 07:27:55.000078 | orchestrator | Sunday 05 April 2026 07:27:54 +0000 (0:00:02.268) 0:00:25.873 ********** 2026-04-05 07:27:55.000089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:55.000123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:55.000135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:55.000153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:55.000164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:55.000174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:27:55.000185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:55.000195 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:27:55.000212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:59.745809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:27:59.745918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:59.745944 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:27:59.745967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:27:59.745989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:27:59.746009 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:27:59.746102 | orchestrator | 2026-04-05 07:27:59.746115 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-05 07:27:59.746128 | orchestrator | Sunday 05 April 2026 07:27:56 +0000 (0:00:02.085) 0:00:27.959 ********** 2026-04-05 07:27:59.746141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:59.746204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:59.746219 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:27:59.746231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:59.746243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:59.746255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:27:59.746273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:27:59.746292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.291682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.291812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.291831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.291843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.291855 | orchestrator | 2026-04-05 07:28:08.291869 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-05 07:28:08.291883 | orchestrator | Sunday 05 April 2026 07:28:01 +0000 (0:00:05.482) 0:00:33.441 ********** 2026-04-05 07:28:08.291959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:08.291997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:08.292010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:08.292022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:08.292034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:08.292053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:08.292064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:08.292083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:17.529966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 
07:28:17.530289 | orchestrator | 2026-04-05 07:28:17.530313 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-05 07:28:17.530327 | orchestrator | Sunday 05 April 2026 07:28:11 +0000 (0:00:09.743) 0:00:43.184 ********** 2026-04-05 07:28:17.530338 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:28:17.530350 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:28:17.530360 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:28:17.530371 | orchestrator | 2026-04-05 07:28:17.530383 | orchestrator | TASK [service-check-containers : aodh | Check containers] ********************** 2026-04-05 07:28:17.530402 | orchestrator | Sunday 05 April 2026 07:28:14 +0000 (0:00:02.917) 0:00:46.102 ********** 2026-04-05 07:28:17.530447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:17.530501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:17.530524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:28:17.530558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-05 07:28:17.530633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.621841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.621930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.621942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.621979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.621996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-05 07:28:21.622011 | orchestrator | 2026-04-05 07:28:21.622071 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] *** 2026-04-05 07:28:21.622081 | orchestrator | Sunday 05 April 2026 07:28:19 +0000 (0:00:04.949) 
0:00:51.052 ********** 2026-04-05 07:28:21.622090 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:28:21.622099 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:28:21.622107 | orchestrator | } 2026-04-05 07:28:21.622115 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:28:21.622123 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:28:21.622130 | orchestrator | } 2026-04-05 07:28:21.622138 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:28:21.622146 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:28:21.622153 | orchestrator | } 2026-04-05 07:28:21.622161 | orchestrator | 2026-04-05 07:28:21.622169 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:28:21.622177 | orchestrator | Sunday 05 April 2026 07:28:21 +0000 (0:00:01.676) 0:00:52.729 ********** 2026-04-05 07:28:21.622201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:28:21.622214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:28:21.622235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:28:21.622243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:28:21.622251 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:28:21.622260 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:28:21.622268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:28:21.622283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:29:39.360787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:29:39.360891 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:29:39.360906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-05 07:29:39.360917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-05 07:29:39.360925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-05 07:29:39.360933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-05 07:29:39.360939 | orchestrator | 
skipping: [testbed-node-2] 2026-04-05 07:29:39.360946 | orchestrator | 2026-04-05 07:29:39.360953 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-05 07:29:39.360961 | orchestrator | Sunday 05 April 2026 07:28:23 +0000 (0:00:02.057) 0:00:54.786 ********** 2026-04-05 07:29:39.360968 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:29:39.360996 | orchestrator | 2026-04-05 07:29:39.361003 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 07:29:39.361009 | orchestrator | Sunday 05 April 2026 07:28:39 +0000 (0:00:16.310) 0:01:11.097 ********** 2026-04-05 07:29:39.361015 | orchestrator | 2026-04-05 07:29:39.361021 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 07:29:39.361028 | orchestrator | Sunday 05 April 2026 07:28:40 +0000 (0:00:00.484) 0:01:11.582 ********** 2026-04-05 07:29:39.361032 | orchestrator | 2026-04-05 07:29:39.361048 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-05 07:29:39.361052 | orchestrator | Sunday 05 April 2026 07:28:40 +0000 (0:00:00.459) 0:01:12.042 ********** 2026-04-05 07:29:39.361056 | orchestrator | 2026-04-05 07:29:39.361060 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-05 07:29:39.361063 | orchestrator | Sunday 05 April 2026 07:28:41 +0000 (0:00:00.945) 0:01:12.987 ********** 2026-04-05 07:29:39.361067 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:29:39.361071 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:29:39.361075 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:29:39.361078 | orchestrator | 2026-04-05 07:29:39.361082 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-05 07:29:39.361086 | orchestrator | Sunday 05 April 2026 07:28:54 +0000 (0:00:13.211) 
0:01:26.199 ********** 2026-04-05 07:29:39.361090 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:29:39.361093 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:29:39.361097 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:29:39.361101 | orchestrator | 2026-04-05 07:29:39.361104 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-05 07:29:39.361108 | orchestrator | Sunday 05 April 2026 07:29:07 +0000 (0:00:12.925) 0:01:39.125 ********** 2026-04-05 07:29:39.361112 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:29:39.361116 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:29:39.361119 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:29:39.361123 | orchestrator | 2026-04-05 07:29:39.361127 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-05 07:29:39.361130 | orchestrator | Sunday 05 April 2026 07:29:20 +0000 (0:00:12.895) 0:01:52.020 ********** 2026-04-05 07:29:39.361134 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:29:39.361138 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:29:39.361142 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:29:39.361145 | orchestrator | 2026-04-05 07:29:39.361149 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:29:39.361154 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:29:39.361160 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:29:39.361163 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:29:39.361167 | orchestrator | 2026-04-05 07:29:39.361171 | orchestrator | 2026-04-05 07:29:39.361174 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-05 07:29:39.361178 | orchestrator | Sunday 05 April 2026 07:29:39 +0000 (0:00:18.589) 0:02:10.610 ********** 2026-04-05 07:29:39.361182 | orchestrator | =============================================================================== 2026-04-05 07:29:39.361185 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.59s 2026-04-05 07:29:39.361189 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 16.31s 2026-04-05 07:29:39.361193 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 13.21s 2026-04-05 07:29:39.361196 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 12.93s 2026-04-05 07:29:39.361207 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 12.90s 2026-04-05 07:29:39.361211 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.74s 2026-04-05 07:29:39.361215 | orchestrator | aodh : Copying over config.json files for services ---------------------- 5.48s 2026-04-05 07:29:39.361219 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 5.13s 2026-04-05 07:29:39.361222 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 4.95s 2026-04-05 07:29:39.361226 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 3.89s 2026-04-05 07:29:39.361230 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 2.92s 2026-04-05 07:29:39.361233 | orchestrator | aodh : include_tasks ---------------------------------------------------- 2.86s 2026-04-05 07:29:39.361237 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 2.27s 2026-04-05 07:29:39.361241 | orchestrator | service-cert-copy : aodh | 
Copying over backend internal TLS key -------- 2.08s 2026-04-05 07:29:39.361245 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.06s 2026-04-05 07:29:39.361248 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.00s 2026-04-05 07:29:39.361252 | orchestrator | aodh : Flush handlers --------------------------------------------------- 1.89s 2026-04-05 07:29:39.361256 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.88s 2026-04-05 07:29:39.361259 | orchestrator | aodh : Copying over existing policy file -------------------------------- 1.86s 2026-04-05 07:29:39.361263 | orchestrator | service-check-containers : aodh | Notify handlers to restart containers --- 1.68s 2026-04-05 07:29:39.555712 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-05 07:29:39.611167 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 07:29:39.611261 | orchestrator | + osism apply -a bootstrap octavia 2026-04-05 07:29:40.913986 | orchestrator | 2026-04-05 07:29:40 | INFO  | Prepare task for execution of octavia. 2026-04-05 07:29:40.982046 | orchestrator | 2026-04-05 07:29:40 | INFO  | Task d32d1f4f-c860-4913-9c32-0f2c1412f733 (octavia) was prepared for execution. 2026-04-05 07:29:40.982111 | orchestrator | 2026-04-05 07:29:40 | INFO  | It takes a moment until task d32d1f4f-c860-4913-9c32-0f2c1412f733 (octavia) has been started and output is visible here. 
2026-04-05 07:30:28.019248 | orchestrator |
2026-04-05 07:30:28.019412 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:30:28.019432 | orchestrator |
2026-04-05 07:30:28.019444 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:30:28.019456 | orchestrator | Sunday 05 April 2026 07:29:46 +0000 (0:00:02.242) 0:00:02.242 **********
2026-04-05 07:30:28.019467 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:30:28.019479 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:30:28.019490 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:30:28.019501 | orchestrator |
2026-04-05 07:30:28.019512 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:30:28.019523 | orchestrator | Sunday 05 April 2026 07:29:48 +0000 (0:00:01.729) 0:00:03.972 **********
2026-04-05 07:30:28.019534 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-05 07:30:28.019545 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-05 07:30:28.019556 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-05 07:30:28.019566 | orchestrator |
2026-04-05 07:30:28.019577 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-05 07:30:28.019588 | orchestrator |
2026-04-05 07:30:28.019599 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 07:30:28.019610 | orchestrator | Sunday 05 April 2026 07:29:50 +0000 (0:00:01.859) 0:00:05.831 **********
2026-04-05 07:30:28.019621 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:30:28.019632 | orchestrator |
2026-04-05 07:30:28.019644 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-05 07:30:28.019680 | orchestrator | Sunday 05 April 2026 07:29:53 +0000 (0:00:03.318) 0:00:09.149 **********
2026-04-05 07:30:28.019692 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:30:28.019703 | orchestrator |
2026-04-05 07:30:28.019714 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-05 07:30:28.019725 | orchestrator | Sunday 05 April 2026 07:29:57 +0000 (0:00:03.559) 0:00:12.709 **********
2026-04-05 07:30:28.019735 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:30:28.019746 | orchestrator |
2026-04-05 07:30:28.019757 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-05 07:30:28.019768 | orchestrator | Sunday 05 April 2026 07:30:00 +0000 (0:00:03.035) 0:00:15.744 **********
2026-04-05 07:30:28.019779 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:30:28.019793 | orchestrator |
2026-04-05 07:30:28.019806 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-05 07:30:28.019819 | orchestrator | Sunday 05 April 2026 07:30:03 +0000 (0:00:03.185) 0:00:18.929 **********
2026-04-05 07:30:28.019833 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:30:28.019845 | orchestrator |
2026-04-05 07:30:28.019858 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-05 07:30:28.019870 | orchestrator | Sunday 05 April 2026 07:30:06 +0000 (0:00:03.527) 0:00:22.456 **********
2026-04-05 07:30:28.019884 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:30:28.019898 | orchestrator |
2026-04-05 07:30:28.019911 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:30:28.019926 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 07:30:28.019940 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 07:30:28.019955 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-05 07:30:28.019968 | orchestrator |
2026-04-05 07:30:28.019982 | orchestrator |
2026-04-05 07:30:28.019995 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:30:28.020010 | orchestrator | Sunday 05 April 2026 07:30:27 +0000 (0:00:20.879) 0:00:43.336 **********
2026-04-05 07:30:28.020022 | orchestrator | ===============================================================================
2026-04-05 07:30:28.020036 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.88s
2026-04-05 07:30:28.020048 | orchestrator | octavia : Creating Octavia database ------------------------------------- 3.56s
2026-04-05 07:30:28.020061 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 3.53s
2026-04-05 07:30:28.020074 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.32s
2026-04-05 07:30:28.020088 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 3.18s
2026-04-05 07:30:28.020101 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 3.04s
2026-04-05 07:30:28.020114 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.86s
2026-04-05 07:30:28.020128 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.73s
2026-04-05 07:30:28.208815 | orchestrator | + osism apply -a upgrade octavia
2026-04-05 07:30:29.493425 | orchestrator | 2026-04-05 07:30:29 | INFO  | Prepare task for execution of octavia.
2026-04-05 07:30:29.572010 | orchestrator | 2026-04-05 07:30:29 | INFO  | Task 5196ecf8-a9f3-4b26-ba77-e8de38ebbfcb (octavia) was prepared for execution.
2026-04-05 07:30:29.572103 | orchestrator | 2026-04-05 07:30:29 | INFO  | It takes a moment until task 5196ecf8-a9f3-4b26-ba77-e8de38ebbfcb (octavia) has been started and output is visible here.
2026-04-05 07:31:09.364433 | orchestrator |
2026-04-05 07:31:09.364555 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:31:09.364597 | orchestrator |
2026-04-05 07:31:09.364610 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:31:09.364621 | orchestrator | Sunday 05 April 2026 07:30:34 +0000 (0:00:01.726) 0:00:01.726 **********
2026-04-05 07:31:09.364632 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:31:09.364644 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:31:09.364655 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:31:09.364665 | orchestrator |
2026-04-05 07:31:09.364677 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:31:09.364688 | orchestrator | Sunday 05 April 2026 07:30:36 +0000 (0:00:01.788) 0:00:03.514 **********
2026-04-05 07:31:09.364699 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-05 07:31:09.364710 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-05 07:31:09.364721 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-05 07:31:09.364732 | orchestrator |
2026-04-05 07:31:09.364743 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-05 07:31:09.364753 | orchestrator |
2026-04-05 07:31:09.364764 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 07:31:09.364775 | orchestrator | Sunday 05 April 2026 07:30:39 +0000 (0:00:02.713) 0:00:06.228 **********
2026-04-05 07:31:09.364786 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:31:09.364798 | orchestrator |
2026-04-05 07:31:09.364809 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-05 07:31:09.364820 | orchestrator | Sunday 05 April 2026 07:30:42 +0000 (0:00:03.319) 0:00:09.548 **********
2026-04-05 07:31:09.364831 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:31:09.364842 | orchestrator |
2026-04-05 07:31:09.364853 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-05 07:31:09.364863 | orchestrator | Sunday 05 April 2026 07:30:44 +0000 (0:00:01.778) 0:00:11.327 **********
2026-04-05 07:31:09.364874 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:31:09.364885 | orchestrator |
2026-04-05 07:31:09.364896 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-05 07:31:09.364906 | orchestrator | Sunday 05 April 2026 07:30:49 +0000 (0:00:05.263) 0:00:16.591 **********
2026-04-05 07:31:09.364918 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:31:09.364928 | orchestrator |
2026-04-05 07:31:09.364939 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-05 07:31:09.364950 | orchestrator | Sunday 05 April 2026 07:30:53 +0000 (0:00:04.190) 0:00:20.782 **********
2026-04-05 07:31:09.364970 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-05 07:31:09.364988 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-05 07:31:09.365008 | orchestrator |
2026-04-05 07:31:09.365027 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-05 07:31:09.365048 | orchestrator | Sunday 05 April 2026 07:31:01 +0000 (0:00:08.073) 0:00:28.856 **********
2026-04-05 07:31:09.365067 | orchestrator | ok:
[testbed-node-0] 2026-04-05 07:31:09.365088 | orchestrator | 2026-04-05 07:31:09.365102 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-05 07:31:09.365114 | orchestrator | Sunday 05 April 2026 07:31:06 +0000 (0:00:04.343) 0:00:33.199 ********** 2026-04-05 07:31:09.365127 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:31:09.365139 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:31:09.365152 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:31:09.365164 | orchestrator | 2026-04-05 07:31:09.365176 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-05 07:31:09.365189 | orchestrator | Sunday 05 April 2026 07:31:07 +0000 (0:00:01.395) 0:00:34.595 ********** 2026-04-05 07:31:09.365205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:09.365284 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:09.365302 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:09.365318 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:09.365330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:09.365350 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:09.365362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:09.365383 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882512 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882535 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:13.882683 | orchestrator | 2026-04-05 07:31:13.882696 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-05 07:31:13.882708 | orchestrator | Sunday 05 April 2026 07:31:11 
+0000 (0:00:03.795) 0:00:38.391 ********** 2026-04-05 07:31:13.882719 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:31:13.882730 | orchestrator | 2026-04-05 07:31:13.882741 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-05 07:31:13.882751 | orchestrator | Sunday 05 April 2026 07:31:12 +0000 (0:00:00.891) 0:00:39.282 ********** 2026-04-05 07:31:13.882762 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:31:13.882772 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:31:13.882783 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:31:13.882793 | orchestrator | 2026-04-05 07:31:13.882804 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-05 07:31:13.882815 | orchestrator | Sunday 05 April 2026 07:31:13 +0000 (0:00:01.334) 0:00:40.616 ********** 2026-04-05 07:31:13.882827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:13.882849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:13.882861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:13.882873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:13.882892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:18.570179 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:31:18.570329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:18.570348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:18.570381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:18.570389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:18.570396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:18.570404 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:31:18.570426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:18.570435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-05 07:31:18.570447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:18.570453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:18.570459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:18.570465 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:31:18.570471 | orchestrator | 2026-04-05 07:31:18.570478 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-05 07:31:18.570487 | orchestrator | Sunday 05 April 2026 07:31:15 +0000 (0:00:01.739) 0:00:42.356 ********** 2026-04-05 07:31:18.570494 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:31:18.570501 | orchestrator | 2026-04-05 07:31:18.570508 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-05 07:31:18.570515 | orchestrator | Sunday 05 April 2026 07:31:17 +0000 (0:00:01.757) 0:00:44.113 ********** 2026-04-05 07:31:18.570527 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:22.003452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:22.003590 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:22.003607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:22.003636 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:22.003648 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:22.003678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003692 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003711 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003723 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003735 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003759 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:22.003779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:23.823747 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:23.823843 | orchestrator | 2026-04-05 07:31:23.823869 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-05 07:31:23.823890 | orchestrator | Sunday 05 April 2026 07:31:23 +0000 
(0:00:06.196) 0:00:50.310 ********** 2026-04-05 07:31:23.823913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:23.823940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:23.823961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:23.823981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:23.824055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:23.824077 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:31:23.824098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:23.824120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:23.824140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:23.824160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:23.824172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:23.824192 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:31:23.824214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:25.556308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:25.556411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:25.556427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:25.556440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:25.556480 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:31:25.556494 | orchestrator | 2026-04-05 07:31:25.556506 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-05 07:31:25.556518 | orchestrator | Sunday 05 April 2026 07:31:24 +0000 (0:00:01.758) 0:00:52.069 ********** 2026-04-05 07:31:25.556530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:25.556563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:25.556576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:25.556588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:25.556599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:25.556611 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:31:25.556630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:25.556643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:25.556662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:29.228302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:29.228414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:29.228434 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:31:29.228449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:31:29.228494 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:31:29.228509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:31:29.228542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:31:29.228555 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:31:29.228568 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:31:29.228580 | orchestrator | 2026-04-05 07:31:29.228594 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-05 07:31:29.228606 | orchestrator | Sunday 05 April 2026 07:31:26 +0000 (0:00:01.798) 0:00:53.867 ********** 2026-04-05 07:31:29.228618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:29.228641 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:29.228654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 
07:31:29.228677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:39.683679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:39.683794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:39.683836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683936 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:39.683994 | orchestrator | 2026-04-05 07:31:39.684016 | orchestrator | TASK 
[octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-05 07:31:39.684038 | orchestrator | Sunday 05 April 2026 07:31:33 +0000 (0:00:06.606) 0:01:00.474 ********** 2026-04-05 07:31:39.684057 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 07:31:39.684077 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 07:31:39.684096 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-05 07:31:39.684115 | orchestrator | 2026-04-05 07:31:39.684134 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-05 07:31:39.684155 | orchestrator | Sunday 05 April 2026 07:31:36 +0000 (0:00:02.721) 0:01:03.195 ********** 2026-04-05 07:31:39.684188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:53.526726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:53.526900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:31:53.526923 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:53.526937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:53.526949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:31:53.526980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:31:53.527087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:32:19.122326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:32:19.122440 | orchestrator | 2026-04-05 07:32:19.122457 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-05 07:32:19.122471 | 
orchestrator | Sunday 05 April 2026 07:31:54 +0000 (0:00:18.673) 0:01:21.869 ********** 2026-04-05 07:32:19.122482 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:32:19.122494 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:32:19.122505 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:32:19.122515 | orchestrator | 2026-04-05 07:32:19.122526 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-05 07:32:19.122537 | orchestrator | Sunday 05 April 2026 07:31:57 +0000 (0:00:02.991) 0:01:24.861 ********** 2026-04-05 07:32:19.122548 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122559 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122569 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122580 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122591 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122601 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122612 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122622 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122633 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122644 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122655 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122666 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122676 | orchestrator | 2026-04-05 07:32:19.122687 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-05 07:32:19.122697 | orchestrator | Sunday 05 April 2026 07:32:03 +0000 (0:00:06.087) 0:01:30.949 ********** 
2026-04-05 07:32:19.122711 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122730 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122762 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.122779 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122797 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122814 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.122830 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122848 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122899 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.122919 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122938 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122958 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 07:32:19.122976 | orchestrator | 2026-04-05 07:32:19.122994 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-05 07:32:19.123005 | orchestrator | Sunday 05 April 2026 07:32:10 +0000 (0:00:06.234) 0:01:37.184 ********** 2026-04-05 07:32:19.123016 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.123026 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.123037 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-05 07:32:19.123048 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.123058 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-05 07:32:19.123069 | orchestrator | ok: [testbed-node-2] => 
(item=client_ca.cert.pem) 2026-04-05 07:32:19.123079 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.123090 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.123101 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-05 07:32:19.123111 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-05 07:32:19.123122 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-05 07:32:19.123132 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-05 07:32:19.123144 | orchestrator | 2026-04-05 07:32:19.123155 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-05 07:32:19.123166 | orchestrator | Sunday 05 April 2026 07:32:16 +0000 (0:00:06.442) 0:01:43.626 ********** 2026-04-05 07:32:19.123229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:32:19.123249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:32:19.123262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-05 07:32:19.123285 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:32:19.123297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:32:19.123317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-05 07:32:25.251773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-05 07:32:25.251998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:32:25.252011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:32:25.252022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-05 07:32:25.252042 | orchestrator | 2026-04-05 07:32:25.252056 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-05 07:32:25.252069 | 
orchestrator | Sunday 05 April 2026 07:32:23 +0000 (0:00:06.600) 0:01:50.227 ********** 2026-04-05 07:32:25.252081 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:32:25.252093 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:32:25.252104 | orchestrator | } 2026-04-05 07:32:25.252115 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:32:25.252126 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:32:25.252137 | orchestrator | } 2026-04-05 07:32:25.252147 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:32:25.252158 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:32:25.252207 | orchestrator | } 2026-04-05 07:32:25.252231 | orchestrator | 2026-04-05 07:32:25.252250 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:32:25.252268 | orchestrator | Sunday 05 April 2026 07:32:24 +0000 (0:00:01.667) 0:01:51.896 ********** 2026-04-05 07:32:25.252282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:32:25.252297 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:32:25.252321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454377 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:32:25.454410 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:32:25.454439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:32:25.454462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-05 07:32:25.454481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:32:25.454578 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:32:25.454598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-05 07:32:25.454619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-05 07:32:25.454640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-05 07:32:25.454672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-05 07:33:57.042965 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:33:57.043087 | orchestrator | 2026-04-05 07:33:57.043141 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-05 07:33:57.043155 | orchestrator | Sunday 05 April 2026 07:32:27 +0000 (0:00:02.358) 0:01:54.254 ********** 2026-04-05 07:33:57.043167 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043179 | orchestrator | 2026-04-05 07:33:57.043190 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 07:33:57.043201 | orchestrator | Sunday 05 April 2026 07:32:39 +0000 (0:00:12.604) 0:02:06.859 ********** 2026-04-05 07:33:57.043212 | orchestrator | 2026-04-05 07:33:57.043223 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 07:33:57.043234 | orchestrator | Sunday 05 April 2026 07:32:40 +0000 (0:00:00.444) 0:02:07.303 ********** 2026-04-05 07:33:57.043245 | orchestrator | 2026-04-05 07:33:57.043256 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-05 07:33:57.043267 | orchestrator | Sunday 05 April 2026 07:32:40 +0000 (0:00:00.452) 0:02:07.756 ********** 2026-04-05 07:33:57.043278 | orchestrator | 2026-04-05 07:33:57.043289 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-05 07:33:57.043300 | orchestrator | Sunday 05 April 2026 07:32:41 +0000 (0:00:00.806) 0:02:08.563 ********** 2026-04-05 07:33:57.043310 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043321 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:33:57.043332 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:33:57.043343 | orchestrator | 2026-04-05 07:33:57.043354 | orchestrator | RUNNING HANDLER [octavia : Restart 
octavia-driver-agent container] ************* 2026-04-05 07:33:57.043364 | orchestrator | Sunday 05 April 2026 07:33:01 +0000 (0:00:19.849) 0:02:28.412 ********** 2026-04-05 07:33:57.043375 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043386 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:33:57.043397 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:33:57.043408 | orchestrator | 2026-04-05 07:33:57.043419 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-05 07:33:57.043429 | orchestrator | Sunday 05 April 2026 07:33:15 +0000 (0:00:14.523) 0:02:42.936 ********** 2026-04-05 07:33:57.043440 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:33:57.043451 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043479 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:33:57.043490 | orchestrator | 2026-04-05 07:33:57.043501 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-05 07:33:57.043513 | orchestrator | Sunday 05 April 2026 07:33:29 +0000 (0:00:13.397) 0:02:56.334 ********** 2026-04-05 07:33:57.043524 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043535 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:33:57.043545 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:33:57.043556 | orchestrator | 2026-04-05 07:33:57.043567 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-05 07:33:57.043578 | orchestrator | Sunday 05 April 2026 07:33:42 +0000 (0:00:13.218) 0:03:09.552 ********** 2026-04-05 07:33:57.043589 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:33:57.043600 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:33:57.043610 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:33:57.043621 | orchestrator | 2026-04-05 07:33:57.043632 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-05 07:33:57.043644 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:33:57.043657 | orchestrator | testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:33:57.043694 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:33:57.043706 | orchestrator | 2026-04-05 07:33:57.043716 | orchestrator | 2026-04-05 07:33:57.043727 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:33:57.043739 | orchestrator | Sunday 05 April 2026 07:33:56 +0000 (0:00:14.171) 0:03:23.724 ********** 2026-04-05 07:33:57.043749 | orchestrator | =============================================================================== 2026-04-05 07:33:57.043760 | orchestrator | octavia : Restart octavia-api container -------------------------------- 19.85s 2026-04-05 07:33:57.043770 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.67s 2026-04-05 07:33:57.043782 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 14.52s 2026-04-05 07:33:57.043792 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 14.17s 2026-04-05 07:33:57.043803 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 13.40s 2026-04-05 07:33:57.043814 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 13.22s 2026-04-05 07:33:57.043824 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 12.60s 2026-04-05 07:33:57.043835 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.07s 2026-04-05 07:33:57.043846 | orchestrator | octavia : Copying over config.json 
files for services ------------------- 6.61s 2026-04-05 07:33:57.043856 | orchestrator | service-check-containers : octavia | Check containers ------------------- 6.60s 2026-04-05 07:33:57.043867 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.44s 2026-04-05 07:33:57.043878 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.23s 2026-04-05 07:33:57.043888 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.20s 2026-04-05 07:33:57.043899 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.09s 2026-04-05 07:33:57.043928 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.26s 2026-04-05 07:33:57.043940 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.34s 2026-04-05 07:33:57.043950 | orchestrator | octavia : Get service project id ---------------------------------------- 4.19s 2026-04-05 07:33:57.043961 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.80s 2026-04-05 07:33:57.043972 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.32s 2026-04-05 07:33:57.043983 | orchestrator | octavia : Copying over Octavia SSH key ---------------------------------- 2.99s 2026-04-05 07:33:57.245172 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-05 07:33:57.245267 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh 2026-04-05 07:33:58.525739 | orchestrator | 2026-04-05 07:33:58 | INFO  | Prepare task for execution of gnocchi. 2026-04-05 07:33:58.591341 | orchestrator | 2026-04-05 07:33:58 | INFO  | Task cdbc3591-dd0a-4bc2-85b6-effdf1fcca18 (gnocchi) was prepared for execution. 
2026-04-05 07:33:58.591445 | orchestrator | 2026-04-05 07:33:58 | INFO  | It takes a moment until task cdbc3591-dd0a-4bc2-85b6-effdf1fcca18 (gnocchi) has been started and output is visible here. 2026-04-05 07:34:08.692831 | orchestrator | 2026-04-05 07:34:08.692937 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:34:08.692961 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-05 07:34:08.692980 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-05 07:34:08.693013 | orchestrator | 2026-04-05 07:34:08.693030 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:34:08.693046 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-05 07:34:08.693129 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-05 07:34:08.693167 | orchestrator | Sunday 05 April 2026 07:34:03 +0000 (0:00:01.135) 0:00:01.135 ********** 2026-04-05 07:34:08.693183 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:34:08.693201 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:34:08.693217 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:34:08.693233 | orchestrator | 2026-04-05 07:34:08.693249 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:34:08.693265 | orchestrator | Sunday 05 April 2026 07:34:03 +0000 (0:00:00.911) 0:00:02.046 ********** 2026-04-05 07:34:08.693281 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-05 07:34:08.693297 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-05 07:34:08.693313 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-05 07:34:08.693331 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-05 07:34:08.693347 | 
orchestrator | 2026-04-05 07:34:08.693364 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-05 07:34:08.693382 | orchestrator | skipping: no hosts matched 2026-04-05 07:34:08.693400 | orchestrator | 2026-04-05 07:34:08.693418 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:34:08.693437 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 07:34:08.693456 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 07:34:08.693474 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-05 07:34:08.693494 | orchestrator | 2026-04-05 07:34:08.693512 | orchestrator | 2026-04-05 07:34:08.693530 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:34:08.693549 | orchestrator | Sunday 05 April 2026 07:34:08 +0000 (0:00:04.513) 0:00:06.560 ********** 2026-04-05 07:34:08.693567 | orchestrator | =============================================================================== 2026-04-05 07:34:08.693587 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.51s 2026-04-05 07:34:08.693606 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2026-04-05 07:34:09.956508 | orchestrator | 2026-04-05 07:34:09 | INFO  | Prepare task for execution of manila. 2026-04-05 07:34:10.025238 | orchestrator | 2026-04-05 07:34:10 | INFO  | Task e8945793-662a-48e2-8c01-c789e4649cd6 (manila) was prepared for execution. 2026-04-05 07:34:10.025319 | orchestrator | 2026-04-05 07:34:10 | INFO  | It takes a moment until task e8945793-662a-48e2-8c01-c789e4649cd6 (manila) has been started and output is visible here. 
2026-04-05 07:34:25.474594 | orchestrator | 2026-04-05 07:34:25.474674 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:34:25.474682 | orchestrator | 2026-04-05 07:34:25.474687 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:34:25.474693 | orchestrator | Sunday 05 April 2026 07:34:14 +0000 (0:00:01.752) 0:00:01.752 ********** 2026-04-05 07:34:25.474698 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:34:25.474704 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:34:25.474709 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:34:25.474713 | orchestrator | 2026-04-05 07:34:25.474718 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:34:25.474723 | orchestrator | Sunday 05 April 2026 07:34:17 +0000 (0:00:02.286) 0:00:04.038 ********** 2026-04-05 07:34:25.474728 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-05 07:34:25.474733 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-05 07:34:25.474757 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-05 07:34:25.474762 | orchestrator | 2026-04-05 07:34:25.474767 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-05 07:34:25.474772 | orchestrator | 2026-04-05 07:34:25.474777 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-05 07:34:25.474781 | orchestrator | Sunday 05 April 2026 07:34:19 +0000 (0:00:02.470) 0:00:06.509 ********** 2026-04-05 07:34:25.474787 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:34:25.474793 | orchestrator | 2026-04-05 07:34:25.474797 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-05 
07:34:25.474802 | orchestrator | Sunday 05 April 2026 07:34:23 +0000 (0:00:03.518) 0:00:10.028 ********** 2026-04-05 07:34:25.474810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:25.474819 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:25.474825 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:25.474840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474851 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474856 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474868 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474873 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:25.474882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:43.153505 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:43.153625 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:43.153643 | orchestrator |
2026-04-05 07:34:43.153656 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 07:34:43.153670 | orchestrator | Sunday 05 April 2026 07:34:26 +0000 (0:00:03.538) 0:00:13.567 **********
2026-04-05 07:34:43.153681 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:34:43.153693 | orchestrator |
2026-04-05 07:34:43.153704 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-04-05 07:34:43.153715 | orchestrator | Sunday 05 April 2026 07:34:28 +0000 (0:00:01.900) 0:00:15.467 **********
2026-04-05 07:34:43.153726 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:34:43.153738 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:34:43.153748 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:34:43.153759 | orchestrator |
2026-04-05 07:34:43.153770 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-04-05 07:34:43.153781 | orchestrator | Sunday 05 April 2026 07:34:30 +0000 (0:00:02.184) 0:00:17.652 **********
2026-04-05 07:34:43.153792 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153805 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.153816 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153827 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153838 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.153849 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.153884 | orchestrator |
2026-04-05 07:34:43.153896 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-04-05 07:34:43.153907 | orchestrator | Sunday 05 April 2026 07:34:33 +0000 (0:00:02.451) 0:00:20.103 **********
2026-04-05 07:34:43.153918 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153929 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.153940 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153950 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.153979 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-05 07:34:43.153990 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-05 07:34:43.154001 | orchestrator |
2026-04-05 07:34:43.154159 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-04-05 07:34:43.154218 | orchestrator | Sunday 05 April 2026 07:34:35 +0000 (0:00:02.193) 0:00:22.297 **********
2026-04-05 07:34:43.154243 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-04-05 07:34:43.154256 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-04-05 07:34:43.154269 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-04-05 07:34:43.154297 | orchestrator |
2026-04-05 07:34:43.154311 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-04-05 07:34:43.154335 | orchestrator | Sunday 05 April 2026 07:34:37 +0000 (0:00:01.917) 0:00:24.214 **********
2026-04-05 07:34:43.154348 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:34:43.154361 | orchestrator |
2026-04-05 07:34:43.154374 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-04-05 07:34:43.154387 | orchestrator | Sunday 05 April 2026 07:34:38 +0000 (0:00:01.150) 0:00:25.365 **********
2026-04-05 07:34:43.154399 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:34:43.154412 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:34:43.154423 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:34:43.154434 | orchestrator |
2026-04-05 07:34:43.154445 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-05 07:34:43.154456 | orchestrator | Sunday 05 April 2026 07:34:39 +0000 (0:00:01.352) 0:00:26.718 **********
2026-04-05 07:34:43.154467 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:34:43.154477 | orchestrator |
2026-04-05 07:34:43.154489 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-04-05 07:34:43.154500 | orchestrator | Sunday 05 April 2026 07:34:41 +0000 (0:00:01.839) 0:00:28.557 **********
2026-04-05 07:34:43.154514 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:43.154540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:43.154563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:34:47.291883 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.291964 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.291976 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292025 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292034 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292059 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292142 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292151 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:34:47.292169 | orchestrator | 2026-04-05 07:34:47.292179 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-05 07:34:47.292190 | orchestrator | Sunday 05 April 2026 07:34:46 +0000 (0:00:04.986) 0:00:33.544 ********** 2026-04-05 07:34:47.292201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:34:47.292212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:34:47.292230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:34:49.475111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 
07:34:49.475252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475269 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:34:49.475293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475316 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:34:49.475324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:34:49.475333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:49.475340 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:34:49.475348 | orchestrator |
2026-04-05 07:34:49.475356 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-04-05 07:34:49.475365 | orchestrator | Sunday 05 April 2026 07:34:48 +0000 (0:00:02.204) 0:00:35.748 **********
2026-04-05 07:34:49.475373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:49.475381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:49.475396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:52.736272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:52.736409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736422 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:34:52.736435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736522 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:34:52.736534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:52.736556 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:34:52.736567 | orchestrator |
2026-04-05 07:34:52.736580 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-04-05 07:34:52.736593 | orchestrator | Sunday 05 April 2026 07:34:51 +0000 (0:00:02.398) 0:00:38.147 **********
2026-04-05 07:34:52.736606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:52.736635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:59.281587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:59.281696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:34:59.281854 | orchestrator |
2026-04-05 07:34:59.281875 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-04-05 07:34:59.281887 | orchestrator | Sunday 05 April 2026 07:34:56 +0000 (0:00:05.379) 0:00:43.526 **********
2026-04-05 07:34:59.281899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:34:59.281919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:35:09.594655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:35:09.594753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:09.594919 | orchestrator |
2026-04-05 07:35:09.594932 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-04-05 07:35:09.594945 | orchestrator | Sunday 05 April 2026 07:35:04 +0000 (0:00:07.791) 0:00:51.318 **********
2026-04-05 07:35:09.594956 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-04-05 07:35:09.594968 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-04-05 07:35:09.594980 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-04-05 07:35:09.594991 | orchestrator |
2026-04-05 07:35:09.595002 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-04-05 07:35:09.595013 | orchestrator | Sunday 05 April 2026 07:35:09 +0000 (0:00:04.624) 0:00:55.942 **********
2026-04-05 07:35:09.595032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:35:12.558411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558548 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:35:12.558560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:35:12.558571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558617 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:35:12.558627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:35:12.558644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:12.558674 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:35:12.558684 | orchestrator |
2026-04-05 07:35:12.558695 | orchestrator | TASK [service-check-containers : manila | Check containers] ********************
2026-04-05 07:35:12.558705 | orchestrator | Sunday 05 April 2026 07:35:11 +0000 (0:00:02.156) 0:00:58.099 **********
2026-04-05 07:35:12.558723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:35:16.543792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:35:16.543896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:35:16.543911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.543922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.543932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.543979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.543991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.544000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.544010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.544019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-05 07:35:16.544028 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-05 07:35:16.544087 | orchestrator |
2026-04-05 07:35:16.544106 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] ***
2026-04-05 07:35:16.544119 | orchestrator | Sunday 05 April 2026 07:35:16 +0000 (0:00:04.929) 0:01:03.028 **********
2026-04-05 07:35:16.544130 | orchestrator | changed: [testbed-node-0] => {
2026-04-05 07:35:16.544139 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:35:16.544148 | orchestrator | }
2026-04-05 07:35:16.544156 | orchestrator | changed: [testbed-node-1] => {
2026-04-05 07:35:16.544165 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:35:16.544173 | orchestrator | }
2026-04-05 07:35:16.544182 | orchestrator | changed: [testbed-node-2] => {
2026-04-05 07:35:16.544197 | orchestrator |  "msg": "Notifying handlers"
2026-04-05 07:35:18.616371 | orchestrator | }
2026-04-05 07:35:18.616441 | orchestrator |
2026-04-05 07:35:18.616447 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-05 07:35:18.616453 | orchestrator | Sunday 05 April 2026 07:35:17 +0000 (0:00:01.476) 0:01:04.505 **********
2026-04-05 07:35:18.616460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:35:18.616467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616500 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:35:18.616515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:35:18.616519 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': 
'30'}}})  2026-04-05 07:35:18.616531 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:35:18.616535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:35:18.616543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-05 07:35:18.616551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-05 07:38:47.985327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-05 07:38:47.985445 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:38:47.985463 | orchestrator | 2026-04-05 07:38:47.985475 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-05 07:38:47.985487 | orchestrator | Sunday 05 April 2026 07:35:20 +0000 (0:00:02.555) 0:01:07.060 ********** 2026-04-05 07:38:47.985498 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:38:47.985509 | orchestrator | 2026-04-05 07:38:47.985520 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-05 07:38:47.985531 | orchestrator | Sunday 05 April 2026 07:35:39 +0000 (0:00:19.293) 0:01:26.354 ********** 2026-04-05 07:38:47.985542 | orchestrator | 2026-04-05 07:38:47.985553 | orchestrator 
| TASK [manila : Flush handlers] ************************************************* 2026-04-05 07:38:47.985564 | orchestrator | Sunday 05 April 2026 07:35:40 +0000 (0:00:00.472) 0:01:26.827 ********** 2026-04-05 07:38:47.985575 | orchestrator | 2026-04-05 07:38:47.985585 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-05 07:38:47.985596 | orchestrator | Sunday 05 April 2026 07:35:40 +0000 (0:00:00.531) 0:01:27.359 ********** 2026-04-05 07:38:47.985607 | orchestrator | 2026-04-05 07:38:47.985617 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-05 07:38:47.985628 | orchestrator | Sunday 05 April 2026 07:35:41 +0000 (0:00:00.791) 0:01:28.150 ********** 2026-04-05 07:38:47.985639 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:38:47.985650 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:38:47.985660 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:38:47.985672 | orchestrator | 2026-04-05 07:38:47.985683 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-05 07:38:47.985694 | orchestrator | Sunday 05 April 2026 07:35:59 +0000 (0:00:17.772) 0:01:45.923 ********** 2026-04-05 07:38:47.985705 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:38:47.985716 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:38:47.985726 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:38:47.985762 | orchestrator | 2026-04-05 07:38:47.985774 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-05 07:38:47.985785 | orchestrator | Sunday 05 April 2026 07:36:12 +0000 (0:00:13.733) 0:01:59.657 ********** 2026-04-05 07:38:47.985795 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:38:47.985820 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:38:47.985831 | orchestrator | changed: [testbed-node-1] 2026-04-05 
07:38:47.985843 | orchestrator | 2026-04-05 07:38:47.985855 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-05 07:38:47.985869 | orchestrator | Sunday 05 April 2026 07:36:25 +0000 (0:00:13.071) 0:02:12.729 ********** 2026-04-05 07:38:47.985881 | orchestrator | 2026-04-05 07:38:47.985894 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ******** 2026-04-05 07:38:47.985906 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:38:47.985954 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:38:47.985966 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:38:47.985978 | orchestrator | 2026-04-05 07:38:47.985991 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:38:47.986005 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:38:47.986080 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:38:47.986096 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-05 07:38:47.986109 | orchestrator | 2026-04-05 07:38:47.986121 | orchestrator | 2026-04-05 07:38:47.986133 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:38:47.986147 | orchestrator | Sunday 05 April 2026 07:38:47 +0000 (0:02:21.617) 0:04:34.347 ********** 2026-04-05 07:38:47.986160 | orchestrator | =============================================================================== 2026-04-05 07:38:47.986170 | orchestrator | manila : Restart manila-share container ------------------------------- 141.62s 2026-04-05 07:38:47.986181 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 19.29s 2026-04-05 07:38:47.986192 | orchestrator | manila : Restart manila-api 
container ---------------------------------- 17.77s 2026-04-05 07:38:47.986202 | orchestrator | manila : Restart manila-data container --------------------------------- 13.73s 2026-04-05 07:38:47.986213 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 13.07s 2026-04-05 07:38:47.986224 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.79s 2026-04-05 07:38:47.986234 | orchestrator | manila : Copying over config.json files for services -------------------- 5.38s 2026-04-05 07:38:47.986245 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.99s 2026-04-05 07:38:47.986255 | orchestrator | service-check-containers : manila | Check containers -------------------- 4.93s 2026-04-05 07:38:47.986284 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.63s 2026-04-05 07:38:47.986295 | orchestrator | manila : Ensuring config directories exist ------------------------------ 3.54s 2026-04-05 07:38:47.986306 | orchestrator | manila : include_tasks -------------------------------------------------- 3.52s 2026-04-05 07:38:47.986317 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.56s 2026-04-05 07:38:47.986327 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.47s 2026-04-05 07:38:47.986338 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 2.45s 2026-04-05 07:38:47.986348 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 2.40s 2026-04-05 07:38:47.986359 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.29s 2026-04-05 07:38:47.986369 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS certificate --- 2.20s 2026-04-05 07:38:47.986390 | orchestrator | manila : Copy over ceph Manila 
keyrings --------------------------------- 2.19s 2026-04-05 07:38:47.986401 | orchestrator | manila : Ensuring manila service ceph config subdir exists -------------- 2.18s 2026-04-05 07:38:48.184906 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-05 07:38:48.185029 | orchestrator | + osism migrate rabbitmq3to4 delete 2026-04-05 07:38:54.508113 | orchestrator | 2026-04-05 07:38:54 | ERROR  | Unable to get ansible vault password 2026-04-05 07:38:54.508223 | orchestrator | 2026-04-05 07:38:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 07:38:54.508240 | orchestrator | 2026-04-05 07:38:54 | ERROR  | Dropping encrypted entries 2026-04-05 07:38:54.541249 | orchestrator | 2026-04-05 07:38:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-05 07:38:54.826001 | orchestrator | 2026-04-05 07:38:54 | INFO  | Found 128 classic queue(s) in vhost '/' 2026-04-05 07:38:54.877983 | orchestrator | 2026-04-05 07:38:54 | INFO  | Deleted queue: alarm.all.sample 2026-04-05 07:38:54.927175 | orchestrator | 2026-04-05 07:38:54 | INFO  | Deleted queue: alarming.sample 2026-04-05 07:38:54.964585 | orchestrator | 2026-04-05 07:38:54 | INFO  | Deleted queue: barbican.workers 2026-04-05 07:38:55.016938 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: barbican.workers.barbican.queue 2026-04-05 07:38:55.051730 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: barbican.workers_fanout_18b3fae6b5b5421e97ec3da19073b569 2026-04-05 07:38:55.095229 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: barbican.workers_fanout_4ddc118d8202414da266bb205a703361 2026-04-05 07:38:55.136690 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: barbican.workers_fanout_d74481b6d2214dab8155a19eedc7479c 2026-04-05 07:38:55.185755 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: barbican_notifications.info 
2026-04-05 07:38:55.221076 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central
2026-04-05 07:38:55.260644 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central.testbed-node-0
2026-04-05 07:38:55.311256 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central.testbed-node-1
2026-04-05 07:38:55.371549 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central.testbed-node-2
2026-04-05 07:38:55.411998 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_383abf8f2f994cce8dbe19fe95774187
2026-04-05 07:38:55.454280 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_618192fe4c984e8f9d5f8f5453fdf454
2026-04-05 07:38:55.494484 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_69eb92e2080c496e98dd0ae944b48875
2026-04-05 07:38:55.537402 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_7113c27131b049eb9a2ce248a95af3e8
2026-04-05 07:38:55.582151 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_c50ebf563a244b5bae4111f58f5330cf
2026-04-05 07:38:55.636314 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: central_fanout_ed2957344d104be8a65f4d5814f9e42e
2026-04-05 07:38:55.679300 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-backup
2026-04-05 07:38:55.735361 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-backup.testbed-node-0
2026-04-05 07:38:55.780811 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-backup.testbed-node-1
2026-04-05 07:38:55.819307 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-backup.testbed-node-2
2026-04-05 07:38:55.863300 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-scheduler
2026-04-05 07:38:55.912858 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0
2026-04-05 07:38:55.953330 | orchestrator | 2026-04-05 07:38:55 | INFO  | Deleted queue: cinder-scheduler.testbed-node-1
2026-04-05 07:38:55.998122 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2
2026-04-05 07:38:56.054214 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume
2026-04-05 07:38:56.103124 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes
2026-04-05 07:38:56.156464 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0
2026-04-05 07:38:56.196297 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes
2026-04-05 07:38:56.244374 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1
2026-04-05 07:38:56.291765 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes
2026-04-05 07:38:56.352147 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2
2026-04-05 07:38:56.394853 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: compute
2026-04-05 07:38:56.441020 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: compute.testbed-node-3
2026-04-05 07:38:56.489986 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: compute.testbed-node-4
2026-04-05 07:38:56.533217 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: compute.testbed-node-5
2026-04-05 07:38:56.571271 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: conductor
2026-04-05 07:38:56.616323 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: conductor.testbed-node-0
2026-04-05 07:38:56.657235 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: conductor.testbed-node-1
2026-04-05 07:38:56.700535 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: conductor.testbed-node-2
2026-04-05 07:38:56.753264 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: event.sample
2026-04-05 07:38:56.784252 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.10:44698 -> 192.168.16.10:5672
2026-04-05 07:38:56.795752 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.10:44618 -> 192.168.16.10:5672
2026-04-05 07:38:56.814158 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.11:58440 -> 192.168.16.11:5672
2026-04-05 07:38:56.828266 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.10:44684 -> 192.168.16.10:5672
2026-04-05 07:38:56.846823 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.12:43208 -> 192.168.16.11:5672
2026-04-05 07:38:56.859406 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.11:36106 -> 192.168.16.10:5672
2026-04-05 07:38:56.875729 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.12:41614 -> 192.168.16.10:5672
2026-04-05 07:38:56.892339 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.11:36028 -> 192.168.16.10:5672
2026-04-05 07:38:56.909874 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed connection: 192.168.16.12:41606 -> 192.168.16.10:5672
2026-04-05 07:38:56.910228 | orchestrator | 2026-04-05 07:38:56 | INFO  | Closed 9 connection(s) for queue: magnum-conductor
2026-04-05 07:38:56.947519 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: magnum-conductor
2026-04-05 07:38:56.995833 | orchestrator | 2026-04-05 07:38:56 | INFO  | Deleted queue: magnum-conductor.7ristaccxjw2
2026-04-05 07:38:57.082673 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor.pwl742r3p6gf
2026-04-05 07:38:57.130654 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor.ytinpazuiaaz
2026-04-05 07:38:57.160262 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_013aea6961e148de939216aebc1aea8f
2026-04-05 07:38:57.192173 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_37125cf6dd734863b0fed6dc3045a5a3
2026-04-05 07:38:57.233586 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_735461c06f214014a111a40e0d999058
2026-04-05 07:38:57.263401 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_7d980fdfb1ab4fa6a6ceaa55f4c071f9
2026-04-05 07:38:57.293406 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_7db7ce94ef8c4e1bbf1b0f4053e347f9
2026-04-05 07:38:57.325877 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_7f9fd00ef4b3425f8ae8e286f4aa7bb8
2026-04-05 07:38:57.357506 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_a55b0a482f774f11b32bff88a473874e
2026-04-05 07:38:57.401087 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_d5dedb15ef3a412082262ab047959cfa
2026-04-05 07:38:57.438334 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: magnum-conductor_fanout_df7d0e19aae04ad482ce69521ae3a8f7
2026-04-05 07:38:57.484454 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-data
2026-04-05 07:38:57.532722 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-data.testbed-node-0
2026-04-05 07:38:57.580309 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-data.testbed-node-1
2026-04-05 07:38:57.625247 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-data.testbed-node-2
2026-04-05 07:38:57.668564 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-scheduler
2026-04-05 07:38:57.707442 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-scheduler.testbed-node-0
2026-04-05 07:38:57.744996 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-scheduler.testbed-node-1
2026-04-05 07:38:57.792113 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-scheduler.testbed-node-2
2026-04-05 07:38:57.826987 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-share
2026-04-05 07:38:57.882072 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1
2026-04-05 07:38:57.924094 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1
2026-04-05 07:38:57.962326 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1
2026-04-05 07:38:57.992801 | orchestrator | 2026-04-05 07:38:57 | INFO  | Deleted queue: manila-share_fanout_03dbe29bbda1476290e4cc7fcf03cc4c
2026-04-05 07:38:58.060261 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: manila-share_fanout_b50e428a08f04c24b71ced18f8c3bd38
2026-04-05 07:38:58.110799 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: manila-share_fanout_fbaff35712e444ab89d1cfcf2af7b4b3
2026-04-05 07:38:58.295090 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: notifications.audit
2026-04-05 07:38:58.471757 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: notifications.critical
2026-04-05 07:38:58.646658 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: notifications.debug
2026-04-05 07:38:58.780611 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: notifications.error
2026-04-05 07:38:58.902746 | orchestrator | 2026-04-05 07:38:58 | INFO  | Deleted queue: notifications.info
2026-04-05 07:38:59.040068 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: notifications.sample
2026-04-05 07:38:59.214882 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: notifications.warn
2026-04-05 07:38:59.260788 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: octavia_provisioning_v2
2026-04-05 07:38:59.305277 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0
2026-04-05 07:38:59.343176 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1
2026-04-05 07:38:59.394894 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-2
2026-04-05 07:38:59.427151 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer
2026-04-05 07:38:59.467684 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer.testbed-node-0
2026-04-05 07:38:59.503577 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer.testbed-node-1
2026-04-05 07:38:59.539291 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer.testbed-node-2
2026-04-05 07:38:59.573348 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_37d7250190064fbeb7141ea09ebad6fe
2026-04-05 07:38:59.613870 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_7ac108ae955243c5934e01feca93ea73
2026-04-05 07:38:59.654413 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_8109401e79994bd399597e4538316791
2026-04-05 07:38:59.696826 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_a8bfe0d57eed4c05bd2dd1bc5af5fe1d
2026-04-05 07:38:59.728214 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_b5b7a4c87010407ab2092c7b7688b6f4
2026-04-05 07:38:59.767019 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: producer_fanout_da99d47fff2d4411bc44c37e937245a0
2026-04-05 07:38:59.808360 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: q-plugin
2026-04-05 07:38:59.853877 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: q-plugin.testbed-node-0
2026-04-05 07:38:59.909115 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: q-plugin.testbed-node-1
2026-04-05 07:38:59.950963 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: q-plugin.testbed-node-2
2026-04-05 07:38:59.988444 | orchestrator | 2026-04-05 07:38:59 | INFO  | Deleted queue: q-reports-plugin
2026-04-05 07:39:00.050094 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0
2026-04-05 07:39:00.096221 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1
2026-04-05 07:39:00.139453 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2
2026-04-05 07:39:00.187272 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-server-resource-versions
2026-04-05 07:39:00.237011 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0
2026-04-05 07:39:00.282374 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-1
2026-04-05 07:39:00.343632 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2
2026-04-05 07:39:00.380089 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_0a63a28b086b4251b3fa295a09058239
2026-04-05 07:39:00.420812 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_2bcb4a1834a3476c9ff3efe3996e6279
2026-04-05 07:39:00.480003 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_3a57173f55ee4d168922c1fbc207cbd5
2026-04-05 07:39:00.518588 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_48a490fb8e0949b5ada240f164cc30fe
2026-04-05 07:39:00.562361 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_5ad84183739c48cb95db6208c9c5ca6e
2026-04-05 07:39:00.605468 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_5edcf49a330a421eb5777d81b8755cdd
2026-04-05 07:39:00.644783 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_5fb05cb52f484dd2814e72d0e2ee98c0
2026-04-05 07:39:00.675845 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_712981ae54bc4e6587b145067f4ec85a
2026-04-05 07:39:00.711187 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_8876fac2b0924ff4a6c462edf94c4e8d
2026-04-05 07:39:00.749876 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_8f343648f282475bb94dd5265e7d1303
2026-04-05 07:39:00.788468 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: reply_dfa5756900bc4c7bbced0b497ca4d3ce
2026-04-05 07:39:00.835564 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: scheduler
2026-04-05 07:39:00.890656 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: scheduler.testbed-node-0
2026-04-05 07:39:00.932878 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: scheduler.testbed-node-1
2026-04-05 07:39:00.979298 | orchestrator | 2026-04-05 07:39:00 | INFO  | Deleted queue: scheduler.testbed-node-2
2026-04-05 07:39:01.027277 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker
2026-04-05 07:39:01.074329 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker.testbed-node-0
2026-04-05 07:39:01.123342 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker.testbed-node-1
2026-04-05 07:39:01.172631 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker.testbed-node-2
2026-04-05 07:39:01.213723 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_31de4b73eef6424b9e0fb59b5684b6c7
2026-04-05 07:39:01.263760 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_3334b2477c5d4120b0ac4268c9480f64
2026-04-05 07:39:01.305149 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_ab1fa9e0318d418db1446a10829cad0f
2026-04-05 07:39:01.344308 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_b4322b25e937465caffc6999bacf90f2
2026-04-05 07:39:01.378644 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_b4898e808be14f60a5861b12b1fa428e
2026-04-05 07:39:01.426750 | orchestrator | 2026-04-05 07:39:01 | INFO  | Deleted queue: worker_fanout_ef2ef92dedcd43e2825ab858da35f1fc
2026-04-05 07:39:01.426991 | orchestrator | 2026-04-05 07:39:01 | INFO  | Successfully deleted 128 queue(s) in vhost '/'
2026-04-05 07:39:01.701656 | orchestrator | + osism migrate rabbitmq3to4 list
2026-04-05 07:39:07.930393 | orchestrator | 2026-04-05 07:39:07 | ERROR  | Unable to get ansible vault password
2026-04-05 07:39:07.930500 | orchestrator | 2026-04-05 07:39:07 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 07:39:07.930546 | orchestrator | 2026-04-05 07:39:07 | ERROR  | Dropping encrypted entries
2026-04-05 07:39:07.964276 | orchestrator | 2026-04-05 07:39:07 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-05 07:39:08.202498 | orchestrator | 2026-04-05 07:39:08 | INFO  | Found 13 classic queue(s) in vhost '/':
2026-04-05 07:39:08.202644 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-04-05 07:39:08.202663 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor.7ristaccxjw2 (vhost: /, messages: 0)
2026-04-05 07:39:08.202675 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor.pwl742r3p6gf (vhost: /, messages: 0)
2026-04-05 07:39:08.202686 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor.ytinpazuiaaz (vhost: /, messages: 0)
2026-04-05 07:39:08.202698 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_013aea6961e148de939216aebc1aea8f (vhost: /, messages: 0)
2026-04-05 07:39:08.202711 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_37125cf6dd734863b0fed6dc3045a5a3 (vhost: /, messages: 0)
2026-04-05 07:39:08.202819 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_735461c06f214014a111a40e0d999058 (vhost: /, messages: 0)
2026-04-05 07:39:08.202835 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_7d980fdfb1ab4fa6a6ceaa55f4c071f9 (vhost: /, messages: 0)
2026-04-05 07:39:08.202846 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_7db7ce94ef8c4e1bbf1b0f4053e347f9 (vhost: /, messages: 0)
2026-04-05 07:39:08.202856 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_7f9fd00ef4b3425f8ae8e286f4aa7bb8 (vhost: /, messages: 0)
2026-04-05 07:39:08.202885 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_a55b0a482f774f11b32bff88a473874e (vhost: /, messages: 0)
2026-04-05 07:39:08.202897 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_d5dedb15ef3a412082262ab047959cfa (vhost: /, messages: 0)
2026-04-05 07:39:08.202942 | orchestrator | 2026-04-05 07:39:08 | INFO  |  - magnum-conductor_fanout_df7d0e19aae04ad482ce69521ae3a8f7 (vhost: /, messages: 0)
2026-04-05 07:39:08.439450 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum
2026-04-05 07:39:14.689302 | orchestrator | 2026-04-05 07:39:14 | ERROR  | Unable to get ansible vault password
2026-04-05 07:39:14.689413 | orchestrator | 2026-04-05 07:39:14 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 07:39:14.689430 | orchestrator | 2026-04-05 07:39:14 | ERROR  | Dropping encrypted entries
2026-04-05 07:39:14.723044 | orchestrator | 2026-04-05 07:39:14 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-05 07:39:14.895476 | orchestrator | 2026-04-05 07:39:14 | INFO  | Found 192 quorum queue(s) in vhost 'openstack':
2026-04-05 07:39:14.895594 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896287 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - alarming.sample (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896326 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican.workers (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896348 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896370 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896416 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896427 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896438 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - barbican_notifications.info (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896449 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896467 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896486 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896816 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896881 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896895 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896941 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896953 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896964 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.896975 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897043 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897057 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897068 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897079 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897632 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897732 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897756 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897774 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0)
2026-04-05 07:39:14.897978 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898006 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898102 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898125 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898165 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898388 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898419 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898439 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0)
2026-04-05 07:39:14.898457 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899155 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899261 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899284 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899410 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899437 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899459 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899772 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899795 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899807 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899817 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899829 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899840 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899851 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899861 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.899872 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900204 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900233 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900249 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900281 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900554 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900577 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900587 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900729 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900745 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.900755 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901140 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901163 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901173 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - event.sample (vhost: openstack, messages: 6)
2026-04-05 07:39:14.901186 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901202 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901525 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.901561 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902146 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902181 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902191 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902201 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902211 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.902985 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903074 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903090 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903125 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903148 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903160 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903171 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903183 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903215 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903333 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903350 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903373 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903385 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.audit (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903794 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.critical (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903849 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.debug (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903860 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.error (vhost: openstack, messages: 0)
2026-04-05 07:39:14.903990 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.info (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904007 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.sample (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904017 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - notifications.warn (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904026 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904096 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904300 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904318 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904328 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904544 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904570 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904664 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904678 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904922 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0)
2026-04-05 07:39:14.904942 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905059 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-neutron (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905076 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905512 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905532 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905557 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905707 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905725 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905735 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905920 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.905939 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906060 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906078 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906413 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906434 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906672 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906692 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906702 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.906787 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.907048 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0)
2026-04-05 07:39:14.907064 | orchestrator | 2026-04-05 07:39:14 | INFO  |
- q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907073 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907080 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907216 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907604 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907620 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907628 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907805 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907831 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907840 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.907896 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908134 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908162 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908219 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908402 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908417 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908531 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908749 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.908773 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909028 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909047 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909600 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909617 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909625 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909641 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909805 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909820 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909930 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.909996 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910006 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910106 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910259 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910272 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910280 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910417 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910669 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.910821 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911023 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - 
q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911198 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911209 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911343 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911354 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911361 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911553 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911563 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911757 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911769 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911777 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911935 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.911952 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.912959 | 
orchestrator | 2026-04-05 07:39:14 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.912974 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler (vhost: openstack, messages: 0) 2026-04-05 07:39:14.912988 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.912995 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913001 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913007 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913469 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913483 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913497 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913503 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913509 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913516 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913662 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.913792 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914106 | 
orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914120 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914259 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914320 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914564 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914753 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0) 2026-04-05 07:39:14.914974 | orchestrator | 2026-04-05 07:39:14 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0) 2026-04-05 07:39:15.174371 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges 2026-04-05 07:39:21.556096 | orchestrator | 2026-04-05 07:39:21 | ERROR  | Unable to get ansible vault password 2026-04-05 07:39:21.556197 | orchestrator | 2026-04-05 07:39:21 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-05 07:39:21.556215 | orchestrator | 2026-04-05 07:39:21 | ERROR  | Dropping encrypted entries 2026-04-05 07:39:21.589631 | orchestrator | 2026-04-05 07:39:21 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-05 07:39:21.610721 | orchestrator | 2026-04-05 07:39:21 | INFO  | Found 27 exchange(s) in vhost '/'
2026-04-05 07:39:21.654770 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: aodh
2026-04-05 07:39:21.701786 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: ceilometer
2026-04-05 07:39:21.739687 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: cinder
2026-04-05 07:39:21.777255 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: designate
2026-04-05 07:39:21.816950 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: dns
2026-04-05 07:39:21.862242 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: glance
2026-04-05 07:39:21.903755 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: heat
2026-04-05 07:39:21.946892 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: ironic
2026-04-05 07:39:21.987769 | orchestrator | 2026-04-05 07:39:21 | INFO  | Deleted exchange: keystone
2026-04-05 07:39:22.025610 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: l3_agent_fanout
2026-04-05 07:39:22.078423 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: magnum
2026-04-05 07:39:22.144968 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: magnum-conductor_fanout
2026-04-05 07:39:22.185193 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron
2026-04-05 07:39:22.214700 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout
2026-04-05 07:39:22.253556 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout
2026-04-05 07:39:22.294861 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout
2026-04-05 07:39:22.331186 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout
2026-04-05 07:39:22.367074 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: neutron-vo-Subnet-1.2_fanout
2026-04-05 07:39:22.409067 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: nova
2026-04-05 07:39:22.446133 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: octavia
2026-04-05 07:39:22.484260 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: openstack
2026-04-05 07:39:22.516233 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout
2026-04-05 07:39:22.557736 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout
2026-04-05 07:39:22.592110 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: scheduler_fanout
2026-04-05 07:39:22.636606 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: swift
2026-04-05 07:39:22.681975 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: trove
2026-04-05 07:39:22.729049 | orchestrator | 2026-04-05 07:39:22 | INFO  | Deleted exchange: zaqar
2026-04-05 07:39:22.729240 | orchestrator | 2026-04-05 07:39:22 | INFO  | Successfully deleted 27 exchange(s) in vhost '/'
2026-04-05 07:39:22.996008 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-05 07:39:28.814743 | orchestrator | 2026-04-05 07:39:28 | ERROR  | Unable to get ansible vault password
2026-04-05 07:39:28.814849 | orchestrator | 2026-04-05 07:39:28 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-05 07:39:28.814876 | orchestrator | 2026-04-05 07:39:28 | ERROR  | Dropping encrypted entries
2026-04-05 07:39:28.841780 | orchestrator | 2026-04-05 07:39:28 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-05 07:39:28.852061 | orchestrator | 2026-04-05 07:39:28 | INFO  | No exchanges found in vhost '/'
2026-04-05 07:39:28.994189 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-05 07:39:28.994256 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh
2026-04-05 07:39:30.178267 | orchestrator | 2026-04-05 07:39:30 | INFO  | Prepare task for execution of prometheus.
2026-04-05 07:39:30.236667 | orchestrator | 2026-04-05 07:39:30 | INFO  | Task 21f6760d-1484-402d-9185-e2359c684420 (prometheus) was prepared for execution.
2026-04-05 07:39:30.236742 | orchestrator | 2026-04-05 07:39:30 | INFO  | It takes a moment until task 21f6760d-1484-402d-9185-e2359c684420 (prometheus) has been started and output is visible here.
2026-04-05 07:39:46.083837 | orchestrator | 
2026-04-05 07:39:46.084002 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:39:46.084022 | orchestrator | 
2026-04-05 07:39:46.084035 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:39:46.084074 | orchestrator | Sunday 05 April 2026 07:39:34 +0000 (0:00:01.598) 0:00:01.598 **********
2026-04-05 07:39:46.084086 | orchestrator | ok: [testbed-manager]
2026-04-05 07:39:46.084099 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:39:46.084109 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:39:46.084120 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:39:46.084131 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:39:46.084142 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:39:46.084152 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:39:46.084163 | orchestrator | 
2026-04-05 07:39:46.084174 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:39:46.084185 | orchestrator | Sunday 05 April 2026 07:39:37 +0000 (0:00:02.805) 0:00:04.403 **********
2026-04-05 07:39:46.084197 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084256 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084269 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084287 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084305 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084323 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084342 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-05 07:39:46.084363 | orchestrator | 
2026-04-05 07:39:46.084383 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-05 07:39:46.084403 | orchestrator | 
2026-04-05 07:39:46.084417 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-05 07:39:46.084430 | orchestrator | Sunday 05 April 2026 07:39:40 +0000 (0:00:02.469) 0:00:06.873 **********
2026-04-05 07:39:46.084487 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-05 07:39:46.084504 | orchestrator | 
2026-04-05 07:39:46.084518 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-05 07:39:46.084530 | orchestrator | Sunday 05 April 2026 07:39:43 +0000 (0:00:02.998) 0:00:09.871 **********
2026-04-05 07:39:46.084546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.084568 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-05 07:39:46.084585 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.084631 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.084646 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.084660 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.084679 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.084692 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.084707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.084722 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.084745 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.084765 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.811126 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.811237 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:39:46.811253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.811266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811280 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:39:46.811315 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811345 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811356 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.811371 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811382 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.811409 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:39:46.811419 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811429 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:39:46.811447 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:39:54.114721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:54.114844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:54.114870 | orchestrator | 2026-04-05 07:39:54.114941 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-05 07:39:54.114954 | orchestrator | Sunday 05 April 2026 07:39:48 +0000 (0:00:05.338) 0:00:15.210 ********** 2026-04-05 07:39:54.114964 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-05 07:39:54.114975 | orchestrator | 2026-04-05 07:39:54.114984 | 
orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-05 07:39:54.114993 | orchestrator | Sunday 05 April 2026 07:39:51 +0000 (0:00:02.946) 0:00:18.156 ********** 2026-04-05 07:39:54.115031 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 07:39:54.115043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115053 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115080 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115097 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115106 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115115 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115131 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:39:54.115141 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:54.115151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:54.115160 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:54.115177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:56.433005 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433105 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433143 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433155 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433166 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:56.433178 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:56.433258 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433279 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:39:56.433299 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:56.433309 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433319 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433329 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:56.433340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:39:56.433361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:59.849377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:59.849476 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:39:59.849489 | orchestrator | 2026-04-05 07:39:59.849499 | orchestrator | TASK 
[service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-05 07:39:59.849507 | orchestrator | Sunday 05 April 2026 07:39:58 +0000 (0:00:06.937) 0:00:25.094 ********** 2026-04-05 07:39:59.849515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:39:59.849523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:39:59.849532 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET 
/-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 07:39:59.849557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:39:59.849601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:39:59.849609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:39:59.849616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:39:59.849623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:39:59.849628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:39:59.849634 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:39:59.849640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:39:59.849655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.035502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:01.035628 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:40:01.035657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.035680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:01.035704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:40:01.035722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:01.035761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.035797 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:40:01.035829 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:01.035842 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:40:01.035854 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:01.035865 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:40:01.035930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:01.035942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.035954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.035965 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:40:01.035977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:01.035996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:01.036022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:03.888554 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:40:03.888662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:03.888681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:03.888694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:03.888706 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:40:03.888717 | orchestrator | 2026-04-05 07:40:03.888729 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-05 07:40:03.888742 | orchestrator | Sunday 05 April 2026 07:40:02 +0000 (0:00:04.059) 0:00:29.154 ********** 2026-04-05 07:40:03.888754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:03.888769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 07:40:03.888822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:03.888855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:03.888867 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:03.888948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:03.888960 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:03.888972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:03.888992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:03.889009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:03.889029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:04.551443 | orchestrator | 
skipping: [testbed-node-0] 2026-04-05 07:40:04.551548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:04.551568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:04.551593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:04.551629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551641 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:40:04.551653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:04.551701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:40:04.551716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:04.551728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:04.551770 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:40:04.551781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:04.551809 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:40:04.551828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:09.431437 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:40:09.431535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:40:09.431552 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:40:09.431565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:40:09.431577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:40:09.431623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:40:09.431636 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:40:09.431648 | orchestrator | 2026-04-05 07:40:09.431659 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-05 07:40:09.431671 | orchestrator | Sunday 05 April 2026 07:40:06 +0000 (0:00:04.133) 0:00:33.287 ********** 2026-04-05 07:40:09.431698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-05 07:40:09.431730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:40:09.431743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431807 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:40:09.431824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:09.431843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:11.526156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:11.526195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526221 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:11.526258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:11.526286 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:11.526317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526342 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:40:11.526360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:11.526391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:40:47.048945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:47.049104 | orchestrator
| changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:47.049127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:47.049140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-05 07:40:47.049152 | orchestrator |
2026-04-05 07:40:47.049165 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-05 07:40:47.049178 | orchestrator | Sunday 05 April 2026 07:40:13 +0000 (0:00:07.062) 0:00:40.349 **********
2026-04-05 07:40:47.049189 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 07:40:47.049201 | orchestrator |
2026-04-05 07:40:47.049212 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-05 07:40:47.049222 | orchestrator | Sunday 05 April 2026 07:40:15 +0000 (0:00:02.305) 0:00:42.654 **********
2026-04-05 07:40:47.049233 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:40:47.049309 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:40:47.049322 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:40:47.049333 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:40:47.049344 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:40:47.049354 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:40:47.049365 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:40:47.049385 | orchestrator |
2026-04-05 07:40:47.049408 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-05 07:40:47.049428 | orchestrator | Sunday 05 April 2026 07:40:17 +0000 (0:00:01.962) 0:00:44.617 **********
2026-04-05 07:40:47.049449 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 07:40:47.049468 | orchestrator |
2026-04-05 07:40:47.049488 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-05 07:40:47.049509 | orchestrator | Sunday 05 April 2026 07:40:19 +0000 (0:00:01.799) 0:00:46.416 **********
2026-04-05 07:40:47.049568 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.049586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049600 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.049612 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049626 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.049639 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:40:47.049652 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.049664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049677 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.049690 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049703 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.049716 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 07:40:47.049728 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.049741 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049755 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.049784 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049796 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.049807 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-05 07:40:47.049818 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.049829 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049839 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.049917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049931 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.049942 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-05 07:40:47.049953 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.049964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049975 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.049985 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.049996 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.050007 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-05 07:40:47.050075 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.050088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.050099 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.050110 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.050120 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.050131 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-05 07:40:47.050141 | orchestrator | [WARNING]: Skipped
2026-04-05 07:40:47.050152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.050163 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-05 07:40:47.050174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-05 07:40:47.050185 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-05 07:40:47.050195 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-05 07:40:47.050206 | orchestrator |
2026-04-05 07:40:47.050217 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-05 07:40:47.050228 | orchestrator | Sunday 05 April 2026 07:40:22 +0000 (0:00:03.115) 0:00:49.532 **********
2026-04-05 07:40:47.050250 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050262 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:40:47.050273 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050284 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:40:47.050295 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050305 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050316 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:40:47.050327 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:40:47.050337 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050348 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:40:47.050366 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050377 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:40:47.050388 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-05 07:40:47.050399 | orchestrator |
2026-04-05 07:40:47.050410 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-05 07:40:47.050421 | orchestrator | Sunday 05 April 2026 07:40:41 +0000 (0:00:18.767) 0:01:08.300 **********
2026-04-05 07:40:47.050432 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050443 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:40:47.050453 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050464 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:40:47.050475 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050485 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:40:47.050496 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050507 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:40:47.050518 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050528 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:40:47.050547 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050566 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:40:47.050584 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-05 07:40:47.050603 | orchestrator |
2026-04-05 07:40:47.050620 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-05 07:40:47.050638 | orchestrator | Sunday 05 April 2026 07:40:46 +0000 (0:00:04.551) 0:01:12.851 **********
2026-04-05 07:40:47.050668 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.784913 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785028 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785045 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785057 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785068 | orchestrator | skipping:
[testbed-node-2]
2026-04-05 07:41:29.785105 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.785116 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.785127 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.785138 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785149 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.785160 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-05 07:41:29.785170 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.785181 | orchestrator |
2026-04-05 07:41:29.785193 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-05 07:41:29.785205 | orchestrator | Sunday 05 April 2026 07:40:49 +0000 (0:00:03.388) 0:01:16.240 **********
2026-04-05 07:41:29.785267 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 07:41:29.785279 | orchestrator |
2026-04-05 07:41:29.785290 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-05 07:41:29.785302 | orchestrator | Sunday 05 April 2026 07:40:51 +0000 (0:00:01.822) 0:01:18.062 **********
2026-04-05 07:41:29.785312 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:41:29.785323 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.785333 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.785344 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.785354 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.785365 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.785375 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.785386 | orchestrator |
2026-04-05 07:41:29.785399 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-05 07:41:29.785412 | orchestrator | Sunday 05 April 2026 07:40:53 +0000 (0:00:02.147) 0:01:20.209 **********
2026-04-05 07:41:29.785425 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:41:29.785438 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.785450 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.785462 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.785474 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:41:29.785487 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:41:29.785500 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:41:29.785512 | orchestrator |
2026-04-05 07:41:29.785525 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-05 07:41:29.785537 | orchestrator | Sunday 05 April 2026 07:40:57 +0000 (0:00:03.883) 0:01:24.093 **********
2026-04-05 07:41:29.785549 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785562 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785588 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785601 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.785613 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:41:29.785625 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.785638 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785650 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.785663 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785675 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.785687 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785699 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.785712 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-05 07:41:29.785725 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.785737 | orchestrator |
2026-04-05 07:41:29.785750 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-05 07:41:29.785770 | orchestrator | Sunday 05 April 2026 07:41:00 +0000 (0:00:03.217) 0:01:27.310 **********
2026-04-05 07:41:29.785781 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785792 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.785803 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785813 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.785824 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785875 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.785886 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785897 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.785924 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785936 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.785947 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785958 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-05 07:41:29.785968 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.785979 | orchestrator |
2026-04-05 07:41:29.785989 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-05 07:41:29.786000 | orchestrator | Sunday 05 April 2026 07:41:03 +0000 (0:00:03.129) 0:01:30.440 **********
2026-04-05 07:41:29.786011 | orchestrator | [WARNING]: Skipped
2026-04-05 07:41:29.786084 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-05 07:41:29.786096 | orchestrator | due to this access issue:
2026-04-05 07:41:29.786106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-05 07:41:29.786117 | orchestrator | not a directory
2026-04-05 07:41:29.786128 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-05 07:41:29.786138 | orchestrator |
2026-04-05 07:41:29.786149 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-05 07:41:29.786160 | orchestrator | Sunday 05 April 2026 07:41:05 +0000 (0:00:02.261) 0:01:32.702 **********
2026-04-05 07:41:29.786171 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:41:29.786181 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786192 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786202 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786213 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786223 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786234 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786244 | orchestrator |
2026-04-05 07:41:29.786255 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-05 07:41:29.786265 | orchestrator | Sunday 05 April 2026 07:41:08 +0000 (0:00:02.120) 0:01:34.822 **********
2026-04-05 07:41:29.786276 | orchestrator | skipping: [testbed-manager]
2026-04-05 07:41:29.786287 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786297 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786308 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786318 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786329 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786339 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786350 | orchestrator |
2026-04-05 07:41:29.786361 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] ***
2026-04-05 07:41:29.786371 | orchestrator | Sunday 05 April 2026 07:41:10 +0000 (0:00:02.587) 0:01:37.471 **********
2026-04-05 07:41:29.786390 | orchestrator | ok: [testbed-manager]
2026-04-05 07:41:29.786401 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:41:29.786411 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:41:29.786422 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:41:29.786432 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:41:29.786443 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:41:29.786453 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:41:29.786464 | orchestrator |
2026-04-05 07:41:29.786475 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] *********************************
2026-04-05 07:41:29.786485 | orchestrator | Sunday 05 April 2026 07:41:13 +0000 (0:00:08.155) 0:01:40.058 **********
2026-04-05 07:41:29.786496 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786507 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786517 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786528 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786538 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786555 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786566 | orchestrator | changed: [testbed-manager]
2026-04-05 07:41:29.786577 | orchestrator |
2026-04-05 07:41:29.786588 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] ****************************
2026-04-05 07:41:29.786598 | orchestrator | Sunday 05 April 2026 07:41:21 +0000 (0:00:08.155) 0:01:48.214 **********
2026-04-05 07:41:29.786609 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786620 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786630 | orchestrator | changed: [testbed-manager]
2026-04-05 07:41:29.786641 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786651 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786662 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786672 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786683 | orchestrator |
2026-04-05 07:41:29.786694 | orchestrator | TASK [prometheus : Move _data from old to new volume] **************************
2026-04-05 07:41:29.786705 | orchestrator | Sunday 05 April 2026 07:41:23 +0000 (0:00:02.356) 0:01:50.571 **********
2026-04-05 07:41:29.786716 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786726 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786737 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786748 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786758 | orchestrator | changed: [testbed-manager]
2026-04-05 07:41:29.786769 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786780 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786790 | orchestrator |
2026-04-05 07:41:29.786801 | orchestrator | TASK [prometheus : Remove old Prometheus v2 volume] ****************************
2026-04-05 07:41:29.786812 | orchestrator | Sunday 05 April 2026 07:41:26 +0000 (0:00:02.191) 0:01:52.762 **********
2026-04-05 07:41:29.786822 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:41:29.786856 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:41:29.786867 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:41:29.786878 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:41:29.786888 | orchestrator | changed: [testbed-manager]
2026-04-05 07:41:29.786899 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:41:29.786909 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:41:29.786920 | orchestrator |
2026-04-05 07:41:29.786930 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-04-05 07:41:29.786941 | orchestrator | Sunday 05 April 2026 07:41:28 +0000 (0:00:02.532) 0:01:55.295 **********
2026-04-05 07:41:29.786967 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-05 07:41:31.692923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:41:31.693049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:41:31.693093 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-05 07:41:31.693112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro,
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:41:31.693128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:41:31.693145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:41:31.693164 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-05 07:41:31.693230 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:31.693244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:31.693255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:31.693272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:31.693284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:31.693294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:31.693304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 
07:41:31.693331 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:41:37.875175 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875376 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 
'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-05 07:41:37.875467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875518 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-05 07:41:37.875538 | orchestrator | 2026-04-05 07:41:37.875558 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-04-05 07:41:37.875577 | orchestrator | Sunday 05 April 2026 07:41:35 +0000 (0:00:06.625) 0:02:01.920 ********** 2026-04-05 07:41:37.875594 | orchestrator | changed: [testbed-manager] => { 2026-04-05 07:41:37.875612 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875631 | orchestrator | } 2026-04-05 07:41:37.875650 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:41:37.875667 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875685 | orchestrator | } 2026-04-05 07:41:37.875705 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:41:37.875723 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875742 | orchestrator | } 2026-04-05 07:41:37.875756 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:41:37.875768 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875780 | orchestrator | } 2026-04-05 07:41:37.875793 | orchestrator | changed: [testbed-node-3] => { 2026-04-05 07:41:37.875811 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875869 | orchestrator | } 2026-04-05 07:41:37.875896 | orchestrator | changed: [testbed-node-4] => { 2026-04-05 07:41:37.875915 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875934 | orchestrator | } 
2026-04-05 07:41:37.875953 | orchestrator | changed: [testbed-node-5] => { 2026-04-05 07:41:37.875972 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:41:37.875991 | orchestrator | } 2026-04-05 07:41:37.876013 | orchestrator | 2026-04-05 07:41:37.876037 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:41:37.876058 | orchestrator | Sunday 05 April 2026 07:41:37 +0000 (0:00:02.091) 0:02:04.011 ********** 2026-04-05 07:41:37.876089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.038297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038394 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-05 07:41:38.038425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038433 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.038442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.038451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.038473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038483 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:41:38.038496 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.038511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.038525 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:41:38.038534 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:41:38.038545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.824472 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:41:38.824481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.824487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824500 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:41:38.824506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.824511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824523 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:41:38.824543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.824556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.824562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.824568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-05 07:41:38.824574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-05 07:41:38.824579 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:41:38.824585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-05 07:41:38.824591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-05 07:44:05.104600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-05 07:44:05.104265 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:44:05.104380 | orchestrator |
2026-04-05 07:44:05.104395 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104407 | orchestrator | Sunday 05 April 2026 07:41:40 +0000 (0:00:03.138) 0:02:07.150 **********
2026-04-05 07:44:05.104417 | orchestrator |
2026-04-05 07:44:05.104427 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104437 | orchestrator | Sunday 05 April 2026 07:41:40 +0000 (0:00:00.461) 0:02:07.611 **********
2026-04-05 07:44:05.104446 | orchestrator |
2026-04-05 07:44:05.104456 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104481 | orchestrator | Sunday 05 April 2026 07:41:41 +0000 (0:00:00.440) 0:02:08.052 **********
2026-04-05 07:44:05.104491 | orchestrator |
2026-04-05 07:44:05.104501 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104511 | orchestrator | Sunday 05 April 2026 07:41:41 +0000 (0:00:00.455) 0:02:08.507 **********
2026-04-05 07:44:05.104521 | orchestrator |
2026-04-05 07:44:05.104530 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104540 | orchestrator | Sunday 05 April 2026 07:41:42 +0000 (0:00:00.687) 0:02:09.195 **********
2026-04-05 07:44:05.104549 | orchestrator |
2026-04-05 07:44:05.104559 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104568 | orchestrator | Sunday 05 April 2026 07:41:42 +0000 (0:00:00.433) 0:02:09.628 **********
2026-04-05 07:44:05.104578 | orchestrator |
2026-04-05 07:44:05.104587 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-05 07:44:05.104597 | orchestrator | Sunday 05 April 2026 07:41:43 +0000 (0:00:00.539) 0:02:10.168 **********
2026-04-05 07:44:05.104606 | orchestrator |
2026-04-05 07:44:05.104616 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-05 07:44:05.104625 | orchestrator | Sunday 05 April 2026 07:41:44 +0000 (0:00:00.826) 0:02:10.995 **********
2026-04-05 07:44:05.104635 | orchestrator | changed: [testbed-manager]
2026-04-05 07:44:05.104644 | orchestrator |
2026-04-05 07:44:05.104654 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-05 07:44:05.104664 | orchestrator | Sunday 05 April 2026 07:42:08 +0000 (0:00:24.078) 0:02:35.073 **********
2026-04-05 07:44:05.104673 | orchestrator | changed: [testbed-manager]
2026-04-05 07:44:05.104683 | orchestrator | changed: [testbed-node-5]
2026-04-05 07:44:05.104692 | orchestrator | changed: [testbed-node-4]
2026-04-05 07:44:05.104702 | orchestrator | changed: [testbed-node-3]
2026-04-05 07:44:05.104711 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:44:05.104721 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:44:05.104730 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:44:05.104740 | orchestrator |
2026-04-05 07:44:05.104750 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-05 07:44:05.104797 | orchestrator | Sunday 05 April 2026 07:42:26 +0000 (0:00:18.310) 0:02:53.384 **********
2026-04-05 07:44:05.104812 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:44:05.104823 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:44:05.104835 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:44:05.104846 | orchestrator |
2026-04-05 07:44:05.104862 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-05 07:44:05.104880 | orchestrator | Sunday 05 April 2026 07:42:39 +0000 (0:00:13.237) 0:03:06.622 **********
2026-04-05 07:44:05.104897 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:44:05.104913 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:44:05.104929 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:44:05.104946 | orchestrator |
2026-04-05 07:44:05.104962 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-05 07:44:05.104978 | orchestrator | Sunday 05 April 2026 07:42:53 +0000 (0:00:13.256) 0:03:19.878 **********
2026-04-05 07:44:05.105028 | orchestrator | changed: [testbed-manager]
2026-04-05 07:44:05.105047 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:44:05.105064 | orchestrator | changed: [testbed-node-5]
2026-04-05 07:44:05.105080 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:44:05.105097 | orchestrator | changed: [testbed-node-3]
2026-04-05 07:44:05.105177 | orchestrator | changed: [testbed-node-4]
2026-04-05 07:44:05.105191 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:44:05.105200 | orchestrator |
2026-04-05 07:44:05.105210 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-05 07:44:05.105220 | orchestrator | Sunday 05 April 2026 07:43:10 +0000 (0:00:17.385) 0:03:37.264 **********
2026-04-05 07:44:05.105229 | orchestrator | changed: [testbed-manager]
2026-04-05 07:44:05.105239 | orchestrator |
2026-04-05 07:44:05.105248 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-05 07:44:05.105258 | orchestrator | Sunday 05 April 2026 07:43:26 +0000 (0:00:15.670) 0:03:52.934 **********
2026-04-05 07:44:05.105267 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:44:05.105277 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:44:05.105286 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:44:05.105296 | orchestrator |
2026-04-05 07:44:05.105305 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-05 07:44:05.105315 | orchestrator | Sunday 05 April 2026 07:43:39 +0000 (0:00:12.866) 0:04:05.801 **********
2026-04-05 07:44:05.105324 | orchestrator | changed: [testbed-manager]
2026-04-05 07:44:05.105334 | orchestrator |
2026-04-05 07:44:05.105343 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-05 07:44:05.105353 | orchestrator | Sunday 05 April 2026 07:43:51 +0000 (0:00:12.775) 0:04:18.577 **********
2026-04-05 07:44:05.105362 | orchestrator | changed: [testbed-node-3]
2026-04-05 07:44:05.105371 | orchestrator | changed: [testbed-node-4]
2026-04-05 07:44:05.105381 | orchestrator | changed: [testbed-node-5]
2026-04-05 07:44:05.105390 | orchestrator |
2026-04-05 07:44:05.105400 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:44:05.105410 | orchestrator | testbed-manager : ok=28  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 07:44:05.105440 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:44:05.105450 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:44:05.105460 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-05 07:44:05.105477 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 07:44:05.105487 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 07:44:05.105497 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-05 07:44:05.105507 | orchestrator |
2026-04-05 07:44:05.105516 | orchestrator |
2026-04-05 07:44:05.105526 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:44:05.105536 | orchestrator | Sunday 05 April 2026 07:44:04 +0000 (0:00:12.851) 0:04:31.429 **********
2026-04-05 07:44:05.105547 | orchestrator | ===============================================================================
2026-04-05 07:44:05.105563 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.08s
2026-04-05 07:44:05.105580 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.77s
2026-04-05 07:44:05.105596 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.31s
2026-04-05 07:44:05.105629 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.39s
2026-04-05 07:44:05.105646 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.67s
2026-04-05 07:44:05.105662 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.26s
2026-04-05 07:44:05.105678 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.24s
2026-04-05 07:44:05.105696 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.87s
2026-04-05 07:44:05.105706 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.85s
2026-04-05 07:44:05.105715 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.78s
2026-04-05 07:44:05.105725 | orchestrator | prometheus : Gracefully stop Prometheus --------------------------------- 8.16s
2026-04-05 07:44:05.105734 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.06s
2026-04-05 07:44:05.105744 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.94s
2026-04-05 07:44:05.105753 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.62s
2026-04-05 07:44:05.105786 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.34s
2026-04-05 07:44:05.105796 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.55s
2026-04-05 07:44:05.105805 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.13s
2026-04-05 07:44:05.105815 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 4.06s
2026-04-05 07:44:05.105824 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.88s
2026-04-05 07:44:05.105834 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.85s
2026-04-05 07:44:06.623657 | orchestrator | 2026-04-05 07:44:06 | INFO  | Prepare task for execution of grafana.
2026-04-05 07:44:06.688981 | orchestrator | 2026-04-05 07:44:06 | INFO  | Task 0ed7f6b6-a165-470e-b5c8-920448d24021 (grafana) was prepared for execution.
2026-04-05 07:44:06.689092 | orchestrator | 2026-04-05 07:44:06 | INFO  | It takes a moment until task 0ed7f6b6-a165-470e-b5c8-920448d24021 (grafana) has been started and output is visible here.
2026-04-05 07:44:20.392098 | orchestrator |
2026-04-05 07:44:20.392204 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:44:20.392221 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-05 07:44:20.392233 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-05 07:44:20.392256 | orchestrator |
2026-04-05 07:44:20.392266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:44:20.392277 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-05 07:44:20.392288 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-05 07:44:20.392309 | orchestrator | Sunday 05 April 2026 07:44:11 +0000 (0:00:01.131) 0:00:01.131 **********
2026-04-05 07:44:20.392320 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:44:20.392331 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:44:20.392342 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:44:20.392352 | orchestrator |
2026-04-05 07:44:20.392363 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:44:20.392374 | orchestrator | Sunday 05 April 2026 07:44:11 +0000 (0:00:00.703) 0:00:01.835 **********
2026-04-05 07:44:20.392384 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-05 07:44:20.392395 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-05 07:44:20.392433 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-05 07:44:20.392444 | orchestrator |
2026-04-05 07:44:20.392455 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-05 07:44:20.392466 | orchestrator |
2026-04-05 07:44:20.392476 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-05 07:44:20.392487 | orchestrator | Sunday 05 April 2026 07:44:12 +0000 (0:00:00.768) 0:00:02.603 **********
2026-04-05 07:44:20.392514 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:44:20.392526 | orchestrator |
2026-04-05 07:44:20.392536 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] *****************
2026-04-05 07:44:20.392547 | orchestrator | Sunday 05 April 2026 07:44:13 +0000 (0:00:01.962) 0:00:03.765 **********
2026-04-05 07:44:20.392558 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:44:20.392568 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:44:20.392579 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:44:20.392589 | orchestrator |
2026-04-05 07:44:20.392600 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-05 07:44:20.392611 | orchestrator | Sunday 05 April 2026 07:44:15 +0000 (0:00:01.962) 0:00:05.728 **********
2026-04-05 07:44:20.392625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000',
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392643 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392689 | orchestrator | 2026-04-05 07:44:20.392701 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-05 07:44:20.392715 | orchestrator | Sunday 05 April 2026 07:44:16 +0000 (0:00:00.856) 0:00:06.584 ********** 2026-04-05 07:44:20.392728 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 07:44:20.392741 | orchestrator | 2026-04-05 07:44:20.392783 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-05 07:44:20.392818 | orchestrator | Sunday 05 April 2026 07:44:17 +0000 (0:00:01.209) 0:00:07.794 ********** 2026-04-05 07:44:20.392838 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:44:20.392858 | orchestrator | 2026-04-05 07:44:20.392871 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-05 07:44:20.392884 | orchestrator | Sunday 05 April 2026 07:44:18 +0000 (0:00:01.156) 0:00:08.951 ********** 2026-04-05 07:44:20.392903 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392918 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:20.392944 | orchestrator | 2026-04-05 07:44:20.392956 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-05 07:44:20.392970 | orchestrator | Sunday 05 April 2026 07:44:20 +0000 (0:00:01.233) 0:00:10.184 ********** 2026-04-05 07:44:20.392984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:20.393005 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:44:24.041596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:24.041699 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:44:24.041735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:24.041797 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:44:24.041812 | orchestrator | 2026-04-05 07:44:24.041824 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-05 07:44:24.041836 | orchestrator | Sunday 05 April 2026 07:44:20 +0000 (0:00:00.497) 0:00:10.682 ********** 2026-04-05 07:44:24.041847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:24.041859 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:44:24.041870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:24.041881 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:44:24.041910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:24.041945 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:44:24.041957 | orchestrator | 2026-04-05 07:44:24.041968 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-05 07:44:24.041979 | orchestrator | Sunday 05 April 2026 07:44:21 +0000 (0:00:00.914) 0:00:11.596 ********** 2026-04-05 07:44:24.041990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:24.042009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:24.042081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:24.042094 | orchestrator | 2026-04-05 07:44:24.042105 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-05 07:44:24.042116 | 
orchestrator | Sunday 05 April 2026 07:44:22 +0000 (0:00:01.316) 0:00:12.912 ********** 2026-04-05 07:44:24.042127 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:24.042157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:33.111221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:33.111332 | orchestrator | 2026-04-05 07:44:33.111351 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-05 07:44:33.111364 | orchestrator | Sunday 05 April 2026 07:44:24 +0000 (0:00:01.744) 0:00:14.657 ********** 2026-04-05 07:44:33.111376 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:44:33.111387 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:44:33.111398 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:44:33.111409 | orchestrator | 2026-04-05 07:44:33.111420 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-05 07:44:33.111447 | orchestrator | Sunday 05 April 2026 07:44:24 +0000 (0:00:00.333) 0:00:14.990 ********** 2026-04-05 07:44:33.111459 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 07:44:33.111470 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 07:44:33.111481 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-05 07:44:33.111492 | orchestrator | 2026-04-05 07:44:33.111502 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-05 07:44:33.111513 | orchestrator | Sunday 05 April 2026 07:44:26 +0000 (0:00:01.332) 0:00:16.322 ********** 
2026-04-05 07:44:33.111524 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 07:44:33.111535 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 07:44:33.111546 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-05 07:44:33.111556 | orchestrator | 2026-04-05 07:44:33.111567 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-05 07:44:33.111578 | orchestrator | Sunday 05 April 2026 07:44:27 +0000 (0:00:01.272) 0:00:17.595 ********** 2026-04-05 07:44:33.111589 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-05 07:44:33.111600 | orchestrator | 2026-04-05 07:44:33.111610 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-05 07:44:33.111621 | orchestrator | Sunday 05 April 2026 07:44:28 +0000 (0:00:00.770) 0:00:18.365 ********** 2026-04-05 07:44:33.111657 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:44:33.111669 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:44:33.111681 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:44:33.111692 | orchestrator | 2026-04-05 07:44:33.111703 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-05 07:44:33.111713 | orchestrator | Sunday 05 April 2026 07:44:29 +0000 (0:00:00.967) 0:00:19.333 ********** 2026-04-05 07:44:33.111724 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:44:33.111737 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:44:33.111783 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:44:33.111804 | orchestrator | 2026-04-05 07:44:33.111822 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-05 
07:44:33.111840 | orchestrator | Sunday 05 April 2026 07:44:30 +0000 (0:00:01.636) 0:00:20.970 ********** 2026-04-05 07:44:33.111860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:33.111906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:33.111929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:44:33.111950 | orchestrator | 2026-04-05 07:44:33.111978 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-05 07:44:33.111999 | orchestrator | Sunday 05 April 2026 07:44:32 +0000 (0:00:01.250) 0:00:22.220 ********** 2026-04-05 07:44:33.112018 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:44:33.112037 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:44:33.112056 | orchestrator | } 2026-04-05 07:44:33.112075 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:44:33.112094 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:44:33.112114 | orchestrator | } 2026-04-05 07:44:33.112133 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:44:33.112152 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:44:33.112186 | orchestrator | } 2026-04-05 07:44:33.112205 | orchestrator | 2026-04-05 07:44:33.112223 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:44:33.112242 | orchestrator | Sunday 05 April 2026 07:44:32 +0000 (0:00:00.348) 0:00:22.569 ********** 2026-04-05 07:44:33.112262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:33.112283 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:44:33.112301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:44:33.112322 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:44:33.112354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:46:19.770272 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:46:19.770384 | orchestrator | 2026-04-05 07:46:19.770400 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] ************* 2026-04-05 07:46:19.770413 | orchestrator | Sunday 05 April 2026 07:44:33 +0000 (0:00:00.858) 0:00:23.428 ********** 2026-04-05 07:46:19.770425 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:46:19.770435 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:46:19.770446 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:46:19.770457 | orchestrator | 2026-04-05 07:46:19.770468 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 07:46:19.770479 | orchestrator | Sunday 05 April 2026 07:44:39 +0000 (0:00:05.983) 0:00:29.411 ********** 2026-04-05 07:46:19.770489 | orchestrator | 2026-04-05 07:46:19.770500 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 07:46:19.770510 | orchestrator | Sunday 05 April 2026 07:44:39 +0000 (0:00:00.074) 0:00:29.486 ********** 2026-04-05 07:46:19.770521 | orchestrator | 2026-04-05 07:46:19.770531 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-05 07:46:19.770542 | orchestrator | Sunday 05 April 2026 07:44:39 +0000 (0:00:00.071) 0:00:29.558 ********** 2026-04-05 07:46:19.770578 | orchestrator | 2026-04-05 07:46:19.770589 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-05 07:46:19.770600 | orchestrator | [WARNING]: Failure using method 
(v2_playbook_on_handler_task_start) in callback 2026-04-05 07:46:19.770612 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-05 07:46:19.770648 | orchestrator | Sunday 05 April 2026 07:44:39 +0000 (0:00:00.245) 0:00:29.803 ********** 2026-04-05 07:46:19.770658 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:46:19.770669 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:46:19.770679 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:46:19.770690 | orchestrator | 2026-04-05 07:46:19.770752 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-05 07:46:19.770764 | orchestrator | Sunday 05 April 2026 07:45:17 +0000 (0:00:37.900) 0:01:07.703 ********** 2026-04-05 07:46:19.770774 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:46:19.770785 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:46:19.770795 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-05 07:46:19.770808 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 
2026-04-05 07:46:19.770821 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:46:19.770835 | orchestrator | 2026-04-05 07:46:19.770848 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-05 07:46:19.770860 | orchestrator | Sunday 05 April 2026 07:45:44 +0000 (0:00:26.387) 0:01:34.091 ********** 2026-04-05 07:46:19.770872 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:46:19.770885 | orchestrator | changed: [testbed-node-2] 2026-04-05 07:46:19.770897 | orchestrator | changed: [testbed-node-1] 2026-04-05 07:46:19.770910 | orchestrator | 2026-04-05 07:46:19.770923 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:46:19.770937 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:46:19.770950 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:46:19.770963 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-05 07:46:19.770975 | orchestrator | 2026-04-05 07:46:19.770988 | orchestrator | 2026-04-05 07:46:19.771001 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:46:19.771013 | orchestrator | Sunday 05 April 2026 07:46:19 +0000 (0:00:35.401) 0:02:09.493 ********** 2026-04-05 07:46:19.771026 | orchestrator | =============================================================================== 2026-04-05 07:46:19.771039 | orchestrator | grafana : Restart first grafana container ------------------------------ 37.90s 2026-04-05 07:46:19.771052 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.40s 2026-04-05 07:46:19.771064 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.39s 2026-04-05 07:46:19.771077 | orchestrator | 
grafana : Stopping all Grafana instances but the first node ------------- 5.98s 2026-04-05 07:46:19.771089 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 1.96s 2026-04-05 07:46:19.771102 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.74s 2026-04-05 07:46:19.771114 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.64s 2026-04-05 07:46:19.771126 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.33s 2026-04-05 07:46:19.771138 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s 2026-04-05 07:46:19.771151 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.27s 2026-04-05 07:46:19.771172 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.25s 2026-04-05 07:46:19.771183 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.23s 2026-04-05 07:46:19.771194 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.21s 2026-04-05 07:46:19.771204 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.16s 2026-04-05 07:46:19.771231 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.16s 2026-04-05 07:46:19.771243 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.97s 2026-04-05 07:46:19.771253 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.91s 2026-04-05 07:46:19.771264 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.86s 2026-04-05 07:46:19.771275 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s 2026-04-05 07:46:19.771285 | orchestrator | grafana : Check 
if the folder for custom grafana dashboards exists ------ 0.77s 2026-04-05 07:46:19.950464 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh 2026-04-05 07:46:19.958992 | orchestrator | + set -e 2026-04-05 07:46:19.959078 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-05 07:46:19.959099 | orchestrator | ++ export INTERACTIVE=false 2026-04-05 07:46:19.959112 | orchestrator | ++ INTERACTIVE=false 2026-04-05 07:46:19.959123 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-05 07:46:19.959146 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-05 07:46:19.959158 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-05 07:46:19.960554 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-05 07:46:19.964098 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-05 07:46:19.964135 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-05 07:46:19.965225 | orchestrator | ++ semver 10.0.0 8.0.0 2026-04-05 07:46:20.037899 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-05 07:46:20.038012 | orchestrator | + osism apply clusterapi 2026-04-05 07:46:21.391305 | orchestrator | 2026-04-05 07:46:21 | INFO  | Prepare task for execution of clusterapi. 2026-04-05 07:46:21.456977 | orchestrator | 2026-04-05 07:46:21 | INFO  | Task 0e3500f6-7671-43e5-81d1-896b3c492f45 (clusterapi) was prepared for execution. 2026-04-05 07:46:21.457068 | orchestrator | 2026-04-05 07:46:21 | INFO  | It takes a moment until task 0e3500f6-7671-43e5-81d1-896b3c492f45 (clusterapi) has been started and output is visible here. 
2026-04-05 07:47:39.504288 | orchestrator | 2026-04-05 07:47:39.504381 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-05 07:47:39.504391 | orchestrator | 2026-04-05 07:47:39.504397 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-05 07:47:39.504403 | orchestrator | Sunday 05 April 2026 07:46:27 +0000 (0:00:01.541) 0:00:01.541 ********** 2026-04-05 07:47:39.504409 | orchestrator | included: cert_manager for testbed-manager 2026-04-05 07:47:39.504415 | orchestrator | 2026-04-05 07:47:39.504420 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-05 07:47:39.504426 | orchestrator | Sunday 05 April 2026 07:46:28 +0000 (0:00:01.847) 0:00:03.388 ********** 2026-04-05 07:47:39.504440 | orchestrator | ok: [testbed-manager] 2026-04-05 07:47:39.504445 | orchestrator | 2026-04-05 07:47:39.504450 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-05 07:47:39.504455 | orchestrator | Sunday 05 April 2026 07:46:33 +0000 (0:00:04.481) 0:00:07.870 ********** 2026-04-05 07:47:39.504460 | orchestrator | ok: [testbed-manager] 2026-04-05 07:47:39.504465 | orchestrator | 2026-04-05 07:47:39.504469 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-05 07:47:39.504474 | orchestrator | 2026-04-05 07:47:39.504479 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-05 07:47:39.504484 | orchestrator | Sunday 05 April 2026 07:46:38 +0000 (0:00:05.173) 0:00:13.044 ********** 2026-04-05 07:47:39.504489 | orchestrator | ok: [testbed-manager] 2026-04-05 07:47:39.504515 | orchestrator | 2026-04-05 07:47:39.504520 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-05 07:47:39.504525 | orchestrator | Sunday 05 April 
2026 07:46:41 +0000 (0:00:02.724) 0:00:15.768 ********** 2026-04-05 07:47:39.504530 | orchestrator | ok: [testbed-manager] 2026-04-05 07:47:39.504534 | orchestrator | 2026-04-05 07:47:39.504539 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-05 07:47:39.504544 | orchestrator | Sunday 05 April 2026 07:46:42 +0000 (0:00:01.135) 0:00:16.904 ********** 2026-04-05 07:47:39.504549 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:47:39.504554 | orchestrator | 2026-04-05 07:47:39.504559 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-05 07:47:39.504563 | orchestrator | Sunday 05 April 2026 07:46:43 +0000 (0:00:01.116) 0:00:18.021 ********** 2026-04-05 07:47:39.504568 | orchestrator | ok: [testbed-manager] 2026-04-05 07:47:39.504573 | orchestrator | 2026-04-05 07:47:39.504577 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-05 07:47:39.504582 | orchestrator | Sunday 05 April 2026 07:47:35 +0000 (0:00:51.977) 0:01:09.999 ********** 2026-04-05 07:47:39.504587 | orchestrator | changed: [testbed-manager] 2026-04-05 07:47:39.504592 | orchestrator | 2026-04-05 07:47:39.504596 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:47:39.504602 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-05 07:47:39.504607 | orchestrator | 2026-04-05 07:47:39.504612 | orchestrator | 2026-04-05 07:47:39.504617 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:47:39.504622 | orchestrator | Sunday 05 April 2026 07:47:39 +0000 (0:00:03.602) 0:01:13.602 ********** 2026-04-05 07:47:39.504626 | orchestrator | =============================================================================== 2026-04-05 07:47:39.504631 | orchestrator | Upgrade 
the CAPI management cluster ------------------------------------ 51.98s 2026-04-05 07:47:39.504636 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 5.17s 2026-04-05 07:47:39.504647 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 4.48s 2026-04-05 07:47:39.504652 | orchestrator | Install openstack-resource-controller ----------------------------------- 3.60s 2026-04-05 07:47:39.504657 | orchestrator | Get capi-system namespace phase ----------------------------------------- 2.72s 2026-04-05 07:47:39.504661 | orchestrator | Include cert_manager role ----------------------------------------------- 1.85s 2026-04-05 07:47:39.504703 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 1.14s 2026-04-05 07:47:39.504709 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 1.12s 2026-04-05 07:47:39.694579 | orchestrator | + osism apply -a upgrade magnum 2026-04-05 07:47:41.001785 | orchestrator | 2026-04-05 07:47:41 | INFO  | Prepare task for execution of magnum. 2026-04-05 07:47:41.069024 | orchestrator | 2026-04-05 07:47:41 | INFO  | Task 05c36a93-f310-4307-a545-299964530b89 (magnum) was prepared for execution. 2026-04-05 07:47:41.069103 | orchestrator | 2026-04-05 07:47:41 | INFO  | It takes a moment until task 05c36a93-f310-4307-a545-299964530b89 (magnum) has been started and output is visible here. 
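The clusterapi play above reads the `capi-system` namespace phase to decide between initializing and upgrading the management cluster. A hedged sketch of reproducing that check by hand (assumes `kubectl` and a reachable management cluster; falls back to "Absent" when neither is available):

```shell
#!/usr/bin/env bash
# Hedged sketch: manually querying the capi-system namespace phase that
# the "Get capi-system namespace phase" task inspects. If kubectl is
# missing or the cluster is unreachable, report "Absent" instead.
PHASE=$(kubectl get namespace capi-system \
  -o jsonpath='{.status.phase}' 2>/dev/null || echo "Absent")
echo "capi-system phase: ${PHASE}"
```

An "Active" phase corresponds to the upgrade path taken in this run (the "Initialize" task was skipped and "Upgrade" ran).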
2026-04-05 07:48:01.105622 | orchestrator | 2026-04-05 07:48:01.105760 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-05 07:48:01.105777 | orchestrator | 2026-04-05 07:48:01.105790 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-05 07:48:01.105801 | orchestrator | Sunday 05 April 2026 07:47:45 +0000 (0:00:01.638) 0:00:01.638 ********** 2026-04-05 07:48:01.105812 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:48:01.105824 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:48:01.105835 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:48:01.105846 | orchestrator | 2026-04-05 07:48:01.105864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-05 07:48:01.105916 | orchestrator | Sunday 05 April 2026 07:47:47 +0000 (0:00:01.720) 0:00:03.359 ********** 2026-04-05 07:48:01.105956 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-05 07:48:01.105975 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-05 07:48:01.105995 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-05 07:48:01.106015 | orchestrator | 2026-04-05 07:48:01.106102 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-05 07:48:01.106113 | orchestrator | 2026-04-05 07:48:01.106124 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-05 07:48:01.106135 | orchestrator | Sunday 05 April 2026 07:47:49 +0000 (0:00:01.944) 0:00:05.304 ********** 2026-04-05 07:48:01.106146 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-05 07:48:01.106158 | orchestrator | 2026-04-05 07:48:01.106172 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-05 
07:48:01.106184 | orchestrator | Sunday 05 April 2026 07:47:52 +0000 (0:00:02.537) 0:00:07.841 **********
2026-04-05 07:48:01.106204 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:01.106223 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:01.106259 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:01.106291 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:01.106305 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:01.106316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:01.106327 | orchestrator |
2026-04-05 07:48:01.106338 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-05 07:48:01.106349 | orchestrator | Sunday 05 April 2026 07:47:55 +0000 (0:00:02.842) 0:00:10.684 **********
2026-04-05 07:48:01.106360 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:48:01.106371 | orchestrator |
2026-04-05 07:48:01.106382 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-05 07:48:01.106392 | orchestrator | Sunday 05 April 2026 07:47:56 +0000 (0:00:01.120) 0:00:11.805 **********
2026-04-05 07:48:01.106403 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:48:01.106413 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:48:01.106424 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:48:01.106435 | orchestrator |
2026-04-05 07:48:01.106445 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-05 07:48:01.106456 | orchestrator | Sunday 05 April 2026 07:47:57 +0000 (0:00:01.330) 0:00:13.135 **********
2026-04-05 07:48:01.106466 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-05 07:48:01.106477 | orchestrator |
2026-04-05 07:48:01.106488 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-05 07:48:01.106498 | orchestrator | Sunday 05 April 2026 07:47:59 +0000 (0:00:02.235) 0:00:15.371 **********
2026-04-05 07:48:01.106518 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.558393 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.558508 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.558526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:08.558540 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:08.558603 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:08.558626 | orchestrator |
2026-04-05 07:48:08.558718 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-05 07:48:08.558744 | orchestrator | Sunday 05 April 2026 07:48:03 +0000 (0:00:03.649) 0:00:19.020 **********
2026-04-05 07:48:08.558761 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:48:08.558782 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:48:08.558803 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:48:08.558825 | orchestrator |
2026-04-05 07:48:08.558845 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-05 07:48:08.558863 | orchestrator | Sunday 05 April 2026 07:48:04 +0000 (0:00:01.344) 0:00:20.364 **********
2026-04-05 07:48:08.558874 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:48:08.558885 | orchestrator |
2026-04-05 07:48:08.558896 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-05 07:48:08.558910 | orchestrator | Sunday 05 April 2026 07:48:06 +0000 (0:00:01.870) 0:00:22.235 **********
2026-04-05 07:48:08.558925 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.558941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.558968 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:08.559000 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364387 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364487 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364501 | orchestrator |
2026-04-05 07:48:12.364511 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-05 07:48:12.364519 | orchestrator | Sunday 05 April 2026 07:48:09 +0000 (0:00:03.405) 0:00:25.641 **********
2026-04-05 07:48:12.364551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:12.364560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364568 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:48:12.364606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:12.364615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364622 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:48:12.364628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:12.364642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:12.364648 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:48:12.364715 | orchestrator |
2026-04-05 07:48:12.364724 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-04-05 07:48:12.364732 | orchestrator | Sunday 05 April 2026 07:48:11 +0000 (0:00:01.991) 0:00:27.632 **********
2026-04-05 07:48:12.364752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.393834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:16.393947 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:48:16.393967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.394011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:16.394139 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:48:16.394180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.394230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:16.394252 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:48:16.394271 | orchestrator |
2026-04-05 07:48:16.394292 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-05 07:48:16.394309 | orchestrator | Sunday 05 April 2026 07:48:14 +0000 (0:00:02.141) 0:00:29.774 **********
2026-04-05 07:48:16.394324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.394351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.394372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:16.394396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:24.270271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:24.270404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:24.270421 | orchestrator |
2026-04-05 07:48:24.270435 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-05 07:48:24.270447 | orchestrator | Sunday 05 April 2026 07:48:17 +0000 (0:00:03.569) 0:00:33.343 **********
2026-04-05 07:48:24.270462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:24.270492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:24.270526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-05 07:48:24.270547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:24.270559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:24.270571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:24.270582 | orchestrator | 2026-04-05 07:48:24.270593 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-05 07:48:24.270609 | orchestrator | Sunday 05 April 2026 07:48:23 +0000 (0:00:06.211) 0:00:39.555 ********** 2026-04-05 07:48:24.270629 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:28.594974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 07:48:28.595076 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:48:28.595097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:28.595112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 07:48:28.595123 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:48:28.595152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:28.595183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 07:48:28.595218 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:48:28.595230 | orchestrator | 2026-04-05 07:48:28.595242 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-05 07:48:28.595254 | orchestrator | Sunday 05 
April 2026 07:48:26 +0000 (0:00:02.331) 0:00:41.887 ********** 2026-04-05 07:48:28.595266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:48:28.595278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:48:28.595296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-05 07:48:28.595325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:56.949845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:56.949958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-05 07:48:56.949976 | orchestrator | 2026-04-05 07:48:56.949990 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-05 07:48:56.950003 | orchestrator | Sunday 05 April 2026 07:48:29 +0000 (0:00:03.616) 0:00:45.504 ********** 2026-04-05 
07:48:56.950015 | orchestrator | changed: [testbed-node-0] => { 2026-04-05 07:48:56.950091 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:48:56.950103 | orchestrator | } 2026-04-05 07:48:56.950115 | orchestrator | changed: [testbed-node-1] => { 2026-04-05 07:48:56.950125 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:48:56.950136 | orchestrator | } 2026-04-05 07:48:56.950147 | orchestrator | changed: [testbed-node-2] => { 2026-04-05 07:48:56.950158 | orchestrator |  "msg": "Notifying handlers" 2026-04-05 07:48:56.950169 | orchestrator | } 2026-04-05 07:48:56.950180 | orchestrator | 2026-04-05 07:48:56.950206 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-05 07:48:56.950217 | orchestrator | Sunday 05 April 2026 07:48:31 +0000 (0:00:01.462) 0:00:46.966 ********** 2026-04-05 07:48:56.950259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:56.950300 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 07:48:56.950313 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:48:56.950345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:56.950359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-05 07:48:56.950373 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:48:56.950393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-05 07:48:56.950415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-05 07:48:56.950428 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:48:56.950440 | orchestrator |
2026-04-05 07:48:56.950453 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-05 07:48:56.950466 | orchestrator | Sunday 05 April 2026 07:48:33 +0000 (0:00:02.169) 0:00:49.135 **********
2026-04-05 07:48:56.950479 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:48:56.950492 | orchestrator |
2026-04-05 07:48:56.950504 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-05 07:48:56.950517 | orchestrator | Sunday 05 April 2026 07:48:56 +0000 (0:00:23.056) 0:01:12.192 **********
2026-04-05 07:48:56.950529 | orchestrator |
2026-04-05 07:48:56.950542 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-05 07:48:56.950562 | orchestrator | Sunday 05 April 2026 07:48:56 +0000 (0:00:00.421) 0:01:12.614 **********
2026-04-05 07:49:44.611287 | orchestrator |
2026-04-05 07:49:44.611388 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-05 07:49:44.611401 | orchestrator | Sunday 05 April 2026 07:48:57 +0000 (0:00:00.447) 0:01:13.061 **********
2026-04-05 07:49:44.611410 | orchestrator |
2026-04-05 07:49:44.611419 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-05 07:49:44.611427 | orchestrator | Sunday 05 April 2026 07:48:58 +0000 (0:00:00.765) 0:01:13.827 **********
2026-04-05 07:49:44.611435 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:49:44.611444 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:49:44.611452 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:49:44.611460 | orchestrator |
2026-04-05 07:49:44.611468 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-05 07:49:44.611476 | orchestrator | Sunday 05 April 2026 07:49:20 +0000 (0:00:22.327) 0:01:36.154 **********
2026-04-05 07:49:44.611484 | orchestrator | changed: [testbed-node-2]
2026-04-05 07:49:44.611491 | orchestrator | changed: [testbed-node-1]
2026-04-05 07:49:44.611499 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:49:44.611507 | orchestrator |
2026-04-05 07:49:44.611515 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:49:44.611524 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-05 07:49:44.611533 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 07:49:44.611541 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-05 07:49:44.611549 | orchestrator |
2026-04-05 07:49:44.611557 | orchestrator |
2026-04-05 07:49:44.611565 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:49:44.611573 | orchestrator | Sunday 05 April 2026 07:49:44 +0000 (0:00:23.822) 0:01:59.976 **********
2026-04-05 07:49:44.611604 | orchestrator | ===============================================================================
2026-04-05 07:49:44.611613 | orchestrator | magnum :
Restart magnum-conductor container ---------------------------- 23.82s
2026-04-05 07:49:44.611681 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 23.06s
2026-04-05 07:49:44.611691 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.33s
2026-04-05 07:49:44.611699 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.21s
2026-04-05 07:49:44.611707 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.65s
2026-04-05 07:49:44.611715 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.62s
2026-04-05 07:49:44.611723 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.57s
2026-04-05 07:49:44.611730 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.41s
2026-04-05 07:49:44.611738 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.84s
2026-04-05 07:49:44.611746 | orchestrator | magnum : include_tasks -------------------------------------------------- 2.54s
2026-04-05 07:49:44.611754 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.33s
2026-04-05 07:49:44.611761 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.24s
2026-04-05 07:49:44.611769 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.17s
2026-04-05 07:49:44.611777 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.14s
2026-04-05 07:49:44.611797 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.99s
2026-04-05 07:49:44.611806 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.94s
2026-04-05 07:49:44.611814 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.87s
2026-04-05 07:49:44.611822 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.72s
2026-04-05 07:49:44.611829 | orchestrator | magnum : Flush handlers ------------------------------------------------- 1.63s
2026-04-05 07:49:44.611837 | orchestrator | service-check-containers : magnum | Notify handlers to restart containers --- 1.46s
2026-04-05 07:49:45.442524 | orchestrator | ok: Runtime: 3:22:11.030643
2026-04-05 07:49:45.870322 |
2026-04-05 07:49:45.870466 | TASK [Bootstrap services]
2026-04-05 07:49:46.408502 | orchestrator | skipping: Conditional result was False
2026-04-05 07:49:46.434706 |
2026-04-05 07:49:46.434940 | TASK [Run checks after the upgrade]
2026-04-05 07:49:47.123850 | orchestrator | + set -e
2026-04-05 07:49:47.124004 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 07:49:47.124019 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 07:49:47.124033 | orchestrator | ++ INTERACTIVE=false
2026-04-05 07:49:47.124042 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 07:49:47.124050 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 07:49:47.124060 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-05 07:49:47.124996 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-05 07:49:47.131766 | orchestrator |
2026-04-05 07:49:47.131794 | orchestrator | # CHECK
2026-04-05 07:49:47.131801 | orchestrator |
2026-04-05 07:49:47.131808 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-05 07:49:47.131818 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-05 07:49:47.131825 | orchestrator | + echo
2026-04-05 07:49:47.131832 | orchestrator | + echo '# CHECK'
2026-04-05 07:49:47.131839 | orchestrator | + echo
2026-04-05 07:49:47.131853 | orchestrator | + for node in testbed-manager testbed-node-0
testbed-node-1 testbed-node-2
2026-04-05 07:49:47.132990 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-05 07:49:47.186490 | orchestrator |
2026-04-05 07:49:47.186562 | orchestrator | ## Containers @ testbed-manager
2026-04-05 07:49:47.186569 | orchestrator |
2026-04-05 07:49:47.186576 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-05 07:49:47.186581 | orchestrator | + echo
2026-04-05 07:49:47.186585 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-05 07:49:47.186590 | orchestrator | + echo
2026-04-05 07:49:47.186594 | orchestrator | + osism container testbed-manager ps
2026-04-05 07:49:48.632539 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-05 07:49:48.632719 | orchestrator | 51282ab3da80 registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 5 minutes ago Up 5 minutes prometheus_blackbox_exporter
2026-04-05 07:49:48.632752 | orchestrator | b0076aaf74c5 registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager
2026-04-05 07:49:48.632766 | orchestrator | 6e69557307e2 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-05 07:49:48.632777 | orchestrator | 9de5f281805d registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-05 07:49:48.633255 | orchestrator | f2a1e86ea5a0 registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_server
2026-04-05 07:49:48.633275 | orchestrator | 39583214b43b registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-05 07:49:48.633301 | orchestrator | ddd488f9f503
registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-05 07:49:48.633314 | orchestrator | a6ba207e258a registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-05 07:49:48.633354 | orchestrator | 41a0a029b81b registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 3 hours ago Up 3 hours openstackclient 2026-04-05 07:49:48.633366 | orchestrator | 738c7748cd2b registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1 2026-04-05 07:49:48.633378 | orchestrator | 9a7d2646ed28 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient 2026-04-05 07:49:48.633389 | orchestrator | 42f263976ac2 registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible 2026-04-05 07:49:48.633401 | orchestrator | 154094a39e2c registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes 2026-04-05 07:49:48.633418 | orchestrator | b98f6835e5e2 registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible 2026-04-05 07:49:48.633430 | orchestrator | 80706227cd39 registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible 2026-04-05 07:49:48.633442 | orchestrator | 0f4ac601d28e registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1 2026-04-05 07:49:48.633453 | orchestrator | 7d1f842deb54 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-05 07:49:48.633465 | orchestrator | 
26a4ffa59606 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1 2026-04-05 07:49:48.633476 | orchestrator | e2ffdfe944c8 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1 2026-04-05 07:49:48.633500 | orchestrator | e7017b020224 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1 2026-04-05 07:49:48.633512 | orchestrator | f6fc329ebd49 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-05 07:49:48.633523 | orchestrator | d5f6fc4283fb registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 5 hours ago Up 5 hours cephclient 2026-04-05 07:49:48.633542 | orchestrator | 49e62b023122 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin 2026-04-05 07:49:48.633554 | orchestrator | 2d4360393121 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer 2026-04-05 07:49:48.633566 | orchestrator | 6e62d5fc96c4 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit 2026-04-05 07:49:48.633577 | orchestrator | 8aa92a85ce00 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-05 07:49:48.633589 | orchestrator | 3ecc77353e81 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 6 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-05 07:49:48.633605 | orchestrator | f00f531a1a6c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-05 07:49:48.633646 | orchestrator | 40c0614e15fd 
registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 6 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1 2026-04-05 07:49:48.633658 | orchestrator | 133966d7c834 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 6 hours ago Up 6 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-05 07:49:48.780845 | orchestrator | 2026-04-05 07:49:48.780956 | orchestrator | ## Images @ testbed-manager 2026-04-05 07:49:48.780975 | orchestrator | 2026-04-05 07:49:48.780988 | orchestrator | + echo 2026-04-05 07:49:48.781000 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-05 07:49:48.781013 | orchestrator | + echo 2026-04-05 07:49:48.781024 | orchestrator | + osism container testbed-manager images 2026-04-05 07:49:50.222902 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 07:49:50.223010 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 99f2424badcb 4 hours ago 213MB 2026-04-05 07:49:50.223027 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 2fd96e7e9166 28 hours ago 239MB 2026-04-05 07:49:50.223041 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 5 days ago 635MB 2026-04-05 07:49:50.223052 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 7 days ago 590MB 2026-04-05 07:49:50.223063 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 7 days ago 683MB 2026-04-05 07:49:50.223074 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 7 days ago 277MB 2026-04-05 07:49:50.223085 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 7 days ago 319MB 2026-04-05 07:49:50.223128 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 7 days 
ago 415MB 2026-04-05 07:49:50.223140 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 7 days ago 368MB 2026-04-05 07:49:50.223151 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 7 days ago 860MB 2026-04-05 07:49:50.223162 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 7 days ago 317MB 2026-04-05 07:49:50.223173 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 13 days ago 634MB 2026-04-05 07:49:50.223185 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 13 days ago 1.24GB 2026-04-05 07:49:50.223199 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 13 days ago 585MB 2026-04-05 07:49:50.223219 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 13 days ago 357MB 2026-04-05 07:49:50.223243 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 2 weeks ago 408MB 2026-04-05 07:49:50.223271 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 2 weeks ago 232MB 2026-04-05 07:49:50.223289 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-05 07:49:50.223320 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-05 07:49:50.223336 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-05 07:49:50.223355 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 07:49:50.223378 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 07:49:50.223402 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 
months ago 578MB 2026-04-05 07:49:50.223434 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-05 07:49:50.223454 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 07:49:50.223474 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-05 07:49:50.223494 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-05 07:49:50.223514 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 07:49:50.223527 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-05 07:49:50.223538 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-05 07:49:50.223549 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-05 07:49:50.223582 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-05 07:49:50.223594 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-05 07:49:50.223605 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-05 07:49:50.223676 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-05 07:49:50.223689 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-04-05 07:49:50.223701 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-05 07:49:50.223712 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 
11cc59587f6a 8 months ago 226MB 2026-04-05 07:49:50.223722 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-05 07:49:50.223733 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-05 07:49:50.223744 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-05 07:49:50.388295 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 07:49:50.389203 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-05 07:49:50.447696 | orchestrator | 2026-04-05 07:49:50.447785 | orchestrator | ## Containers @ testbed-node-0 2026-04-05 07:49:50.447794 | orchestrator | 2026-04-05 07:49:50.447801 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 07:49:50.447808 | orchestrator | + echo 2026-04-05 07:49:50.447816 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-05 07:49:50.447823 | orchestrator | + echo 2026-04-05 07:49:50.447830 | orchestrator | + osism container testbed-node-0 ps 2026-04-05 07:49:51.982474 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 07:49:51.983509 | orchestrator | 6026562f4cb1 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 10 seconds ago Up 9 seconds (health: starting) magnum_conductor 2026-04-05 07:49:51.983578 | orchestrator | 0333fbf8b40a registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 43 seconds ago Up 42 seconds (healthy) magnum_api 2026-04-05 07:49:51.983587 | orchestrator | 860d1d177556 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana 2026-04-05 07:49:51.983603 | orchestrator | 626ece173d8f registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes 
prometheus_elasticsearch_exporter 2026-04-05 07:49:51.983611 | orchestrator | 935330a01e0a registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-05 07:49:51.983630 | orchestrator | 9e7f48ee7d03 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-05 07:49:51.983635 | orchestrator | b809f860c168 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-05 07:49:51.983640 | orchestrator | 68cdc0af38da registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-05 07:49:51.983643 | orchestrator | d0701a9443a9 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-05 07:49:51.983662 | orchestrator | d0fa9c142776 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-05 07:49:51.983669 | orchestrator | cd0b0e18ab25 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-05 07:49:51.983673 | orchestrator | f14fd7887c56 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-05 07:49:51.983677 | orchestrator | 4f2f52da7d60 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_worker 2026-04-05 07:49:51.983681 | orchestrator | f6314a31c692 
registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-05 07:49:51.983685 | orchestrator | 8fb9bc70ebc6 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-05 07:49:51.983689 | orchestrator | 8324273fc26f registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-05 07:49:51.983693 | orchestrator | 44b1ca052125 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-05 07:49:51.983713 | orchestrator | 30d0899e4912 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-05 07:49:51.983717 | orchestrator | 2c6323764c0c registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-05 07:49:51.983721 | orchestrator | e1c3d6e930ab registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-05 07:49:51.983725 | orchestrator | 3a99175b9671 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-05 07:49:51.983729 | orchestrator | 2634664165b7 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central 2026-04-05 07:49:51.983733 | orchestrator | 583fc83948da registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 22 minutes 
(healthy) ceilometer_notification 2026-04-05 07:49:51.983736 | orchestrator | 09456da61d46 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-05 07:49:51.983743 | orchestrator | bc3931ac972c registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-05 07:49:51.983747 | orchestrator | 31be285cb069 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-05 07:49:51.983756 | orchestrator | e1779031813e registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-05 07:49:51.983760 | orchestrator | 04b04b544331 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-04-05 07:49:51.983764 | orchestrator | 22bfe9fa501a registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-05 07:49:51.983767 | orchestrator | 55f8f98b5ead registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-05 07:49:51.983771 | orchestrator | 4fe0299d3ada registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-05 07:49:51.983775 | orchestrator | 54a85208838b registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-05 07:49:51.983779 | orchestrator | 
948afe8f48e6 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-05 07:49:51.983783 | orchestrator | e94c5df94629 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-05 07:49:51.983786 | orchestrator | 685851275bd1 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 07:49:51.983790 | orchestrator | 4c27bcd9504a registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 07:49:51.983797 | orchestrator | 2c738bd2645e registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) glance_api 2026-04-05 07:49:51.983804 | orchestrator | 56357ed169c4 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-05 07:49:51.983808 | orchestrator | b9bd0386abcf registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-05 07:49:51.983812 | orchestrator | ab24adfe4150 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon 2026-04-05 07:49:51.983816 | orchestrator | b80c4559a9e9 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy 2026-04-05 07:49:51.983819 | orchestrator | 091d25b3ce46 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_conductor 
2026-04-05 07:49:51.983823 | orchestrator | cb4263ac517f registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-05 07:49:51.983830 | orchestrator | 2e3c6478e3c1 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 47 minutes (healthy) nova_api 2026-04-05 07:49:51.983834 | orchestrator | 6d59232e4723 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler 2026-04-05 07:49:51.983838 | orchestrator | 9cfbcb9d012c registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-05 07:49:51.983842 | orchestrator | b90a172e102c registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-05 07:49:51.983845 | orchestrator | 85b07ad9b267 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-05 07:49:51.983849 | orchestrator | 8d09cc288994 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-05 07:49:51.983853 | orchestrator | 8f6ae9e22ae3 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-05 07:49:51.983857 | orchestrator | d7de61a80f35 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-04-05 07:49:51.983861 | orchestrator | 0d73418ed70f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours 
ceph-mgr-testbed-node-0 2026-04-05 07:49:51.983865 | orchestrator | f92e3d403988 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0 2026-04-05 07:49:51.983868 | orchestrator | cadf84bc192e registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-05 07:49:51.983872 | orchestrator | 99c74fdf4e68 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-05 07:49:51.983876 | orchestrator | 3c39cce59f5e registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-05 07:49:51.983884 | orchestrator | 19c306b97fc5 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-05 07:49:51.983888 | orchestrator | 1d843c5abae3 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-05 07:49:51.983892 | orchestrator | dcff86215b04 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-05 07:49:51.983896 | orchestrator | b13d88bce46b registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-05 07:49:51.983902 | orchestrator | 60d6490d3366 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-05 07:49:51.983907 | orchestrator | 706dd157a63a registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-05 07:49:51.983911 | orchestrator | 25ab7b54a42a 
registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-05 07:49:51.983915 | orchestrator | 9912aaacafbb registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-05 07:49:51.983919 | orchestrator | 05c5ae4ee098 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-05 07:49:51.983922 | orchestrator | cbe75516e763 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-05 07:49:51.983929 | orchestrator | c3a518ba1909 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-05 07:49:51.983933 | orchestrator | 710a93f33255 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-05 07:49:51.983937 | orchestrator | 0d4dfe31d18c registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-05 07:49:51.983941 | orchestrator | 4d275909f976 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-05 07:49:51.983944 | orchestrator | 989434a8db43 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-05 07:49:51.983948 | orchestrator | 2f29745f6378 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-05 07:49:51.983952 | orchestrator | 8ddeaa07a8f2 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours 
fluentd 2026-04-05 07:49:52.126440 | orchestrator | 2026-04-05 07:49:52.126537 | orchestrator | ## Images @ testbed-node-0 2026-04-05 07:49:52.126549 | orchestrator | 2026-04-05 07:49:52.126554 | orchestrator | + echo 2026-04-05 07:49:52.126558 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-05 07:49:52.126563 | orchestrator | + echo 2026-04-05 07:49:52.126568 | orchestrator | + osism container testbed-node-0 images 2026-04-05 07:49:53.748999 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 07:49:53.749075 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 7 days ago 288MB 2026-04-05 07:49:53.749081 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 7 days ago 1.54GB 2026-04-05 07:49:53.749102 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 7 days ago 1.57GB 2026-04-05 07:49:53.749106 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 7 days ago 590MB 2026-04-05 07:49:53.749110 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 7 days ago 277MB 2026-04-05 07:49:53.749114 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 7 days ago 1.04GB 2026-04-05 07:49:53.749118 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 7 days ago 427MB 2026-04-05 07:49:53.749122 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 7 days ago 350MB 2026-04-05 07:49:53.749126 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 7 days ago 683MB 2026-04-05 07:49:53.749130 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 7 days ago 277MB 2026-04-05 07:49:53.749133 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 7 days ago 285MB 2026-04-05 07:49:53.749137 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 7 days ago 293MB 2026-04-05 07:49:53.749141 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 7 days ago 293MB 2026-04-05 07:49:53.749145 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 7 days ago 284MB 2026-04-05 07:49:53.749149 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 7 days ago 284MB 2026-04-05 07:49:53.749152 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 7 days ago 1.2GB 2026-04-05 07:49:53.749167 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 7 days ago 463MB 2026-04-05 07:49:53.749171 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 7 days ago 309MB 2026-04-05 07:49:53.749175 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 7 days ago 368MB 2026-04-05 07:49:53.749179 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 7 days ago 303MB 2026-04-05 07:49:53.749182 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 7 days ago 312MB 2026-04-05 07:49:53.749186 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 7 days ago 317MB 2026-04-05 07:49:53.749190 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 7 days ago 301MB 2026-04-05 07:49:53.749194 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 7 days ago 301MB 2026-04-05 07:49:53.749222 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 7 days ago 301MB 2026-04-05 07:49:53.749226 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 7 days ago 301MB 2026-04-05 07:49:53.749230 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 7 days ago 1.09GB 2026-04-05 07:49:53.749237 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 7 days ago 1.06GB 2026-04-05 07:49:53.749241 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 7 days ago 1.05GB 2026-04-05 07:49:53.749257 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 7 days ago 997MB 2026-04-05 07:49:53.749261 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 7 days ago 996MB 2026-04-05 07:49:53.749265 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 7 days ago 1.07GB 2026-04-05 07:49:53.749269 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 7 days ago 1.07GB 2026-04-05 07:49:53.749272 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 7 days ago 1.05GB 2026-04-05 07:49:53.749276 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 7 days ago 1.05GB 2026-04-05 07:49:53.749280 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 7 days ago 1.05GB 2026-04-05 07:49:53.749283 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 
b52f42ecbb4d 7 days ago 996MB 2026-04-05 07:49:53.749287 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 7 days ago 995MB 2026-04-05 07:49:53.749291 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 7 days ago 995MB 2026-04-05 07:49:53.749295 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 7 days ago 995MB 2026-04-05 07:49:53.749299 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 7 days ago 994MB 2026-04-05 07:49:53.749302 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 7 days ago 1.12GB 2026-04-05 07:49:53.749306 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 7 days ago 1.79GB 2026-04-05 07:49:53.749310 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 7 days ago 1.43GB 2026-04-05 07:49:53.749314 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 7 days ago 1.43GB 2026-04-05 07:49:53.749318 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 7 days ago 1.44GB 2026-04-05 07:49:53.749321 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 7 days ago 1.24GB 2026-04-05 07:49:53.749328 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 7 days ago 1.07GB 2026-04-05 07:49:53.749332 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 7 days ago 1.02GB 2026-04-05 07:49:53.749336 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 7 days ago 1GB 2026-04-05 07:49:53.749340 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 7 days ago 1GB 2026-04-05 07:49:53.749343 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 7 days ago 1GB 2026-04-05 07:49:53.749350 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 7 days ago 1.27GB 2026-04-05 07:49:53.749354 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 7 days ago 1.15GB 2026-04-05 07:49:53.749358 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 7 days ago 1.01GB 2026-04-05 07:49:53.749361 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 7 days ago 1GB 2026-04-05 07:49:53.749368 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 7 days ago 1GB 2026-04-05 07:49:53.749372 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 7 days ago 1.01GB 2026-04-05 07:49:53.749376 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 7 days ago 1GB 2026-04-05 07:49:53.749380 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 7 days ago 1GB 2026-04-05 07:49:53.749388 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 7 days ago 1.23GB 2026-04-05 07:49:53.749392 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 7 days ago 1.39GB 2026-04-05 07:49:53.749396 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 7 days ago 1.23GB 2026-04-05 07:49:53.749400 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 7 
days ago 1.23GB 2026-04-05 07:49:53.749403 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 7 days ago 1.07GB 2026-04-05 07:49:53.749407 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 7 days ago 1.07GB 2026-04-05 07:49:53.749411 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 7 days ago 1.07GB 2026-04-05 07:49:53.749415 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 7 days ago 1.24GB 2026-04-05 07:49:53.749419 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 7 days ago 301MB 2026-04-05 07:49:53.749422 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 07:49:53.749426 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-05 07:49:53.749430 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 07:49:53.749434 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 07:49:53.749437 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 07:49:53.749441 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 07:49:53.749445 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 07:49:53.749449 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 07:49:53.749452 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 
07:49:53.749459 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-05 07:49:53.749463 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 07:49:53.749467 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 07:49:53.749471 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 07:49:53.749481 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 07:49:53.749487 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 07:49:53.749491 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 07:49:53.749495 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 07:49:53.749499 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 07:49:53.749503 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 07:49:53.749506 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 07:49:53.749510 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 07:49:53.749517 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-05 07:49:53.749521 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 
279MB 2026-04-05 07:49:53.749525 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-05 07:49:53.749529 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-05 07:49:53.749532 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-05 07:49:53.749536 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-05 07:49:53.749540 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-05 07:49:53.749544 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-05 07:49:53.749548 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-05 07:49:53.749551 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-05 07:49:53.749555 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-05 07:49:53.749559 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-05 07:49:53.749563 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-05 07:49:53.749570 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-05 07:49:53.749573 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-05 07:49:53.749577 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-05 
07:49:53.749581 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-05 07:49:53.749585 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-05 07:49:53.749589 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-05 07:49:53.749593 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-05 07:49:53.749597 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-05 07:49:53.749600 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-05 07:49:53.749604 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-05 07:49:53.749608 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-05 07:49:53.749612 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-05 07:49:53.749640 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-05 07:49:53.749644 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-05 07:49:53.749648 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-05 07:49:53.749652 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-05 07:49:53.749655 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-05 
07:49:53.749662 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-05 07:49:53.749666 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-05 07:49:53.749673 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-05 07:49:53.749677 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-05 07:49:53.749680 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-05 07:49:53.749684 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-05 07:49:53.749688 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-05 07:49:53.749692 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-05 07:49:53.749699 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-05 07:49:53.749703 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-05 07:49:53.749706 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-05 07:49:53.749710 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-05 07:49:53.749714 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-05 07:49:53.749718 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-05 
07:49:53.749722 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-05 07:49:53.749725 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-05 07:49:53.749729 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-05 07:49:53.749733 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-05 07:49:53.895577 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 07:49:53.896379 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-05 07:49:53.960032 | orchestrator | 2026-04-05 07:49:53.960116 | orchestrator | ## Containers @ testbed-node-1 2026-04-05 07:49:53.960124 | orchestrator | 2026-04-05 07:49:53.960128 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 07:49:53.960132 | orchestrator | + echo 2026-04-05 07:49:53.960138 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-05 07:49:53.960142 | orchestrator | + echo 2026-04-05 07:49:53.960146 | orchestrator | + osism container testbed-node-1 ps 2026-04-05 07:49:55.533155 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 07:49:55.533261 | orchestrator | 86468b309c48 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 14 seconds ago Up 12 seconds (health: starting) magnum_conductor 2026-04-05 07:49:55.533280 | orchestrator | b02051dcfab1 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 37 seconds ago Up 36 seconds (healthy) magnum_api 2026-04-05 07:49:55.533294 | orchestrator | 8817e03b4435 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana 2026-04-05 07:49:55.533305 | orchestrator | 9e7167db28eb 
registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-05 07:49:55.533319 | orchestrator | 586e5b50ecbd registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-05 07:49:55.533331 | orchestrator | 269978bf3259 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-05 07:49:55.533342 | orchestrator | 5c09193f652a registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-05 07:49:55.533390 | orchestrator | 3866829a5d5d registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-05 07:49:55.533406 | orchestrator | bd69983fb43c registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-05 07:49:55.533417 | orchestrator | 758613ea1457 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-05 07:49:55.533428 | orchestrator | 616e20831496 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-05 07:49:55.533439 | orchestrator | 2bde45b3201f registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_api 2026-04-05 07:49:55.533450 | orchestrator | 5be8ba43757a registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 
minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-05 07:49:55.533461 | orchestrator | 49348a91d38a registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-05 07:49:55.533472 | orchestrator | 768c9fb728bf registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-05 07:49:55.533499 | orchestrator | ba264ba28b52 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-05 07:49:55.533510 | orchestrator | 93ea9ed1351d registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_api 2026-04-05 07:49:55.533541 | orchestrator | da733dee4864 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-05 07:49:55.533553 | orchestrator | d84e6098adc8 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-05 07:49:55.533564 | orchestrator | 1f9c495eb521 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-05 07:49:55.533580 | orchestrator | d56bdab34a26 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-05 07:49:55.533592 | orchestrator | 26efef0fb723 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central 2026-04-05 07:49:55.533603 | orchestrator | 2d333a72d8d8 
registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) ceilometer_notification 2026-04-05 07:49:55.533652 | orchestrator | 743caf5a4d13 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-05 07:49:55.533673 | orchestrator | 8b293a13a973 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-05 07:49:55.533687 | orchestrator | d6d3ec33d743 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-05 07:49:55.533700 | orchestrator | 9d13d108c47b registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-05 07:49:55.533712 | orchestrator | f78e96c60be1 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-04-05 07:49:55.533725 | orchestrator | 5e0408e4420f registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-05 07:49:55.533739 | orchestrator | f97d01dd63c8 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-05 07:49:55.533752 | orchestrator | f679d00d038b registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-05 07:49:55.533765 | orchestrator | c3fe74a589a5 
registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-05 07:49:55.533778 | orchestrator | 7221a60b0fdf registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-05 07:49:55.533791 | orchestrator | 4205e2ba4a0d registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-05 07:49:55.533803 | orchestrator | 869029963e58 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 07:49:55.533817 | orchestrator | 431a178d59b6 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 07:49:55.533836 | orchestrator | ad2c4a25ae85 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-05 07:49:55.533850 | orchestrator | a35bce3807d2 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-05 07:49:55.533863 | orchestrator | 3bd558bba1fd registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-05 07:49:55.533877 | orchestrator | ac31fcf0f2d9 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon 2026-04-05 07:49:55.533895 | orchestrator | 825447278ae9 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_novncproxy 2026-04-05 
07:49:55.533906 | orchestrator | 0ec86e5f5fcf registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_conductor 2026-04-05 07:49:55.533917 | orchestrator | 5c0ce4a956a8 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-05 07:49:55.533928 | orchestrator | d9ad93c75446 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api 2026-04-05 07:49:55.533939 | orchestrator | 1acf33dcfe6d registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler 2026-04-05 07:49:55.533950 | orchestrator | 65a435322faa registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-05 07:49:55.533961 | orchestrator | 5c8ede2fb361 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-05 07:49:55.533979 | orchestrator | f99a041e0349 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-05 07:49:55.533991 | orchestrator | eb4afd7d1e87 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-05 07:49:55.534002 | orchestrator | 8d632d686061 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-05 07:49:55.534059 | orchestrator | 758d09b0e55c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up 
About an hour ceph-crash-testbed-node-1 2026-04-05 07:49:55.534074 | orchestrator | 250285928b9c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1 2026-04-05 07:49:55.534085 | orchestrator | 9dc375a9b789 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1 2026-04-05 07:49:55.534096 | orchestrator | 805e620dfe3e registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-05 07:49:55.534107 | orchestrator | 043edb46a033 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-05 07:49:55.534118 | orchestrator | 5936d2168df6 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-05 07:49:55.534136 | orchestrator | 6243c3031e2d registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-05 07:49:55.534156 | orchestrator | 341bf97a4259 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-05 07:49:55.534167 | orchestrator | a4f1dc4da2d7 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-05 07:49:55.534183 | orchestrator | 5d1abe830af5 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-05 07:49:55.534194 | orchestrator | 5a1d490d690a registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-05 07:49:55.534206 | orchestrator | a44e991ca8cd 
registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-05 07:49:55.534217 | orchestrator | b8de446bab81 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-05 07:49:55.534227 | orchestrator | 3941cc0df72e registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-05 07:49:55.534238 | orchestrator | 285bd8c4657d registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-05 07:49:55.534249 | orchestrator | 034bd8cf18b7 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-05 07:49:55.534261 | orchestrator | f159892dd722 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-05 07:49:55.534271 | orchestrator | ac6c1a5386b1 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-05 07:49:55.534282 | orchestrator | e047106d6757 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-05 07:49:55.534293 | orchestrator | 4a5dea9601eb registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-05 07:49:55.534305 | orchestrator | 4f955b214487 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-05 07:49:55.534316 | orchestrator | 26f7cc1a29de registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago 
Up 3 hours kolla_toolbox 2026-04-05 07:49:55.534327 | orchestrator | a25c99467662 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-05 07:49:55.668204 | orchestrator | 2026-04-05 07:49:55.668307 | orchestrator | ## Images @ testbed-node-1 2026-04-05 07:49:55.668322 | orchestrator | 2026-04-05 07:49:55.668335 | orchestrator | + echo 2026-04-05 07:49:55.668347 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-05 07:49:55.668384 | orchestrator | + echo 2026-04-05 07:49:55.668397 | orchestrator | + osism container testbed-node-1 images 2026-04-05 07:49:57.250906 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 07:49:57.251030 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 7 days ago 288MB 2026-04-05 07:49:57.251047 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 7 days ago 1.54GB 2026-04-05 07:49:57.251060 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 7 days ago 1.57GB 2026-04-05 07:49:57.251071 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 7 days ago 590MB 2026-04-05 07:49:57.251082 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 7 days ago 277MB 2026-04-05 07:49:57.251094 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 7 days ago 1.04GB 2026-04-05 07:49:57.251105 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 7 days ago 350MB 2026-04-05 07:49:57.251118 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 7 days ago 427MB 2026-04-05 07:49:57.251242 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 7 days ago 683MB 2026-04-05 
07:49:57.251259 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 7 days ago 277MB 2026-04-05 07:49:57.251270 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 7 days ago 285MB 2026-04-05 07:49:57.251282 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 7 days ago 293MB 2026-04-05 07:49:57.251293 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 7 days ago 293MB 2026-04-05 07:49:57.251304 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 7 days ago 284MB 2026-04-05 07:49:57.251332 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 7 days ago 284MB 2026-04-05 07:49:57.251344 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 7 days ago 1.2GB 2026-04-05 07:49:57.251355 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 7 days ago 463MB 2026-04-05 07:49:57.251366 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 7 days ago 309MB 2026-04-05 07:49:57.251382 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 7 days ago 368MB 2026-04-05 07:49:57.251400 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 7 days ago 303MB 2026-04-05 07:49:57.251419 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 7 days ago 312MB 2026-04-05 07:49:57.251437 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 7 days ago 317MB 2026-04-05 07:49:57.251457 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 7 days ago 301MB 2026-04-05 07:49:57.251475 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 7 days ago 301MB 2026-04-05 07:49:57.251524 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 7 days ago 301MB 2026-04-05 07:49:57.251546 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 7 days ago 301MB 2026-04-05 07:49:57.251566 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 7 days ago 1.09GB 2026-04-05 07:49:57.251586 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 7 days ago 1.06GB 2026-04-05 07:49:57.251606 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 7 days ago 1.05GB 2026-04-05 07:49:57.251680 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 7 days ago 997MB 2026-04-05 07:49:57.251701 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 7 days ago 996MB 2026-04-05 07:49:57.251718 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 7 days ago 1.07GB 2026-04-05 07:49:57.251736 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 7 days ago 1.07GB 2026-04-05 07:49:57.251755 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 7 days ago 1.05GB 2026-04-05 07:49:57.251774 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 7 days ago 1.05GB 2026-04-05 07:49:57.251792 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 
1e4a4601f94f 7 days ago 1.05GB 2026-04-05 07:49:57.251811 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 7 days ago 996MB 2026-04-05 07:49:57.251841 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 7 days ago 995MB 2026-04-05 07:49:57.251861 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 7 days ago 995MB 2026-04-05 07:49:57.251880 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 7 days ago 995MB 2026-04-05 07:49:57.251900 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 7 days ago 994MB 2026-04-05 07:49:57.251917 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 7 days ago 1.12GB 2026-04-05 07:49:57.251933 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 7 days ago 1.79GB 2026-04-05 07:49:57.251944 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 7 days ago 1.43GB 2026-04-05 07:49:57.251955 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 7 days ago 1.43GB 2026-04-05 07:49:57.251966 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 7 days ago 1.44GB 2026-04-05 07:49:57.251977 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 7 days ago 1.24GB 2026-04-05 07:49:57.251988 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 7 days ago 1.07GB 2026-04-05 07:49:57.251999 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 7 days ago 1.02GB 2026-04-05 07:49:57.252021 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 7 days ago 1GB 2026-04-05 07:49:57.252033 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 7 days ago 1GB 2026-04-05 07:49:57.252044 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 7 days ago 1GB 2026-04-05 07:49:57.252055 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 7 days ago 1.27GB 2026-04-05 07:49:57.252066 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 7 days ago 1.15GB 2026-04-05 07:49:57.252077 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 7 days ago 1.01GB 2026-04-05 07:49:57.252088 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 7 days ago 1GB 2026-04-05 07:49:57.252099 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 7 days ago 1GB 2026-04-05 07:49:57.252109 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 7 days ago 1.01GB 2026-04-05 07:49:57.252120 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 7 days ago 1GB 2026-04-05 07:49:57.252131 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 7 days ago 1GB 2026-04-05 07:49:57.252152 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 7 days ago 1.23GB 2026-04-05 07:49:57.252164 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 7 days ago 1.39GB 2026-04-05 07:49:57.252175 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 7 
days ago 1.23GB 2026-04-05 07:49:57.252185 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 7 days ago 1.23GB 2026-04-05 07:49:57.252196 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 7 days ago 1.07GB 2026-04-05 07:49:57.252207 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 7 days ago 1.07GB 2026-04-05 07:49:57.252217 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 7 days ago 1.07GB 2026-04-05 07:49:57.252228 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 7 days ago 1.24GB 2026-04-05 07:49:57.252239 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 7 days ago 301MB 2026-04-05 07:49:57.252250 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 07:49:57.252261 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-05 07:49:57.252271 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 07:49:57.252282 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 07:49:57.252293 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 07:49:57.252311 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 07:49:57.252321 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 07:49:57.252332 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 
07:49:57.252343 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 07:49:57.252360 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-05 07:49:57.252371 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 07:49:57.252382 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 07:49:57.252398 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 07:49:57.252409 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 07:49:57.252420 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 07:49:57.252431 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 07:49:57.252442 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 07:49:57.252453 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 07:49:57.252464 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 07:49:57.252475 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 07:49:57.252485 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 07:49:57.252503 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 
2026-04-05 07:49:57.252515 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-05 07:49:57.252526 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-05 07:49:57.252537 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-05 07:49:57.252547 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-05 07:49:57.252558 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-05 07:49:57.252569 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-05 07:49:57.252580 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-05 07:49:57.252595 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-05 07:49:57.252635 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-05 07:49:57.252657 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-05 07:49:57.252676 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-05 07:49:57.252694 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-05 07:49:57.252713 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-05 07:49:57.252732 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-05 
07:49:57.252751 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-05 07:49:57.252770 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-05 07:49:57.252787 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-05 07:49:57.252803 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-05 07:49:57.252814 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-05 07:49:57.252824 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-05 07:49:57.252836 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-05 07:49:57.252847 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-05 07:49:57.252858 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-05 07:49:57.252869 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-05 07:49:57.252879 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-05 07:49:57.252890 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-05 07:49:57.252901 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-05 07:49:57.252912 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-05 
07:49:57.252923 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-05 07:49:57.252942 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-05 07:49:57.252954 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-05 07:49:57.252964 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-05 07:49:57.252975 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-05 07:49:57.252996 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-05 07:49:57.253013 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-05 07:49:57.253031 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-05 07:49:57.253049 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-05 07:49:57.253068 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-05 07:49:57.253086 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-05 07:49:57.253105 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-05 07:49:57.253123 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-05 07:49:57.253134 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-05 
07:49:57.253145 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-05 07:49:57.253155 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-05 07:49:57.253166 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-05 07:49:57.253177 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-05 07:49:57.253188 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-05 07:49:57.404020 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-05 07:49:57.404276 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-05 07:49:57.464578 | orchestrator | 2026-04-05 07:49:57.464763 | orchestrator | ## Containers @ testbed-node-2 2026-04-05 07:49:57.464790 | orchestrator | 2026-04-05 07:49:57.464809 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-05 07:49:57.464827 | orchestrator | + echo 2026-04-05 07:49:57.464847 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-04-05 07:49:57.464867 | orchestrator | + echo 2026-04-05 07:49:57.464885 | orchestrator | + osism container testbed-node-2 ps 2026-04-05 07:49:59.016159 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-05 07:49:59.016249 | orchestrator | 09c62be76717 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 23 seconds ago Up 21 seconds (health: starting) magnum_conductor 2026-04-05 07:49:59.016265 | orchestrator | abf29e109869 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 42 seconds ago Up 40 seconds (healthy) magnum_api 2026-04-05 07:49:59.016275 | orchestrator | d5a068483f29 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init 
--single-…" 3 minutes ago Up 3 minutes grafana 2026-04-05 07:49:59.016284 | orchestrator | 902b9562e149 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-05 07:49:59.016318 | orchestrator | f9a4230c7fd3 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-05 07:49:59.016343 | orchestrator | 222d0fd0d373 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-05 07:49:59.016353 | orchestrator | 671a383b6112 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-05 07:49:59.016362 | orchestrator | 2a021cbd3de8 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-05 07:49:59.016371 | orchestrator | 3807485d5948 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-05 07:49:59.016380 | orchestrator | 6377fd491f02 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-05 07:49:59.016389 | orchestrator | b5723296bb31 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-05 07:49:59.016402 | orchestrator | 19469dcf4de8 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-05 07:49:59.016411 | orchestrator | 
b1c76d02bcea registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-05 07:49:59.016420 | orchestrator | f62067474826 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-05 07:49:59.016429 | orchestrator | eae32f1f14a2 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-05 07:49:59.016438 | orchestrator | 767306d1e01d registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-05 07:49:59.016447 | orchestrator | f704bb70ccc1 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) octavia_api 2026-04-05 07:49:59.016470 | orchestrator | a9fca80eb5ed registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-05 07:49:59.016480 | orchestrator | d26bd6ae688f registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-05 07:49:59.016489 | orchestrator | 3bff8ec5916e registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-05 07:49:59.016498 | orchestrator | f4def9cb6848 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-05 07:49:59.016513 | orchestrator | c4f93c1643e4 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 
minutes ceilometer_central 2026-04-05 07:49:59.016522 | orchestrator | d03390be96ad registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification 2026-04-05 07:49:59.016531 | orchestrator | bd7d26cee586 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-05 07:49:59.016540 | orchestrator | f54d00e20503 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-05 07:49:59.016549 | orchestrator | 49c5c02608c6 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-05 07:49:59.016558 | orchestrator | a9b75c4c0ef0 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-05 07:49:59.017186 | orchestrator | 9fcb7df651c3 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-04-05 07:49:59.017211 | orchestrator | 658e2de4be87 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-04-05 07:49:59.017221 | orchestrator | b297c3c72780 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-04-05 07:49:59.017237 | orchestrator | 82cbc673467f registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-05 07:49:59.017246 | 
orchestrator | 912f27b967f1 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-05 07:49:59.017255 | orchestrator | c0b0c75ae476 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-05 07:49:59.017264 | orchestrator | afe554b35bb9 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-05 07:49:59.017273 | orchestrator | 4c102c9f0282 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-05 07:49:59.017281 | orchestrator | df0030f58dca registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-05 07:49:59.017290 | orchestrator | d033ed61c410 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-05 07:49:59.017299 | orchestrator | 8c3f616a15fc registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-05 07:49:59.017317 | orchestrator | 12cd44d3947c registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 45 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-05 07:49:59.017326 | orchestrator | 6846a37ea237 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon 2026-04-05 07:49:59.017335 | orchestrator | f990f5ab0331 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) 
nova_novncproxy 2026-04-05 07:49:59.017344 | orchestrator | f82df2f96c63 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 48 minutes (healthy) nova_conductor 2026-04-05 07:49:59.017353 | orchestrator | 77d1a960a46d registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-05 07:49:59.017361 | orchestrator | 454c3c2ed24f registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api 2026-04-05 07:49:59.017370 | orchestrator | e53864e166c4 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_scheduler 2026-04-05 07:49:59.017379 | orchestrator | 5e6270be2df3 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-05 07:49:59.017388 | orchestrator | 85e685ce4186 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-05 07:49:59.017405 | orchestrator | 47385358b929 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-05 07:49:59.017414 | orchestrator | 0df82a7976f1 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-05 07:49:59.017423 | orchestrator | 697e8de395e3 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-05 07:49:59.017432 | orchestrator | 79057daeed62 registry.osism.tech/osism/ceph-daemon:18.2.7 
"/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-04-05 07:49:59.017440 | orchestrator | 991df31bd8df registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2 2026-04-05 07:49:59.017449 | orchestrator | f7093b7c0357 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2 2026-04-05 07:49:59.017458 | orchestrator | 8827e2e72de5 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_northd 2026-04-05 07:49:59.017467 | orchestrator | 572f87315272 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db_relay_1 2026-04-05 07:49:59.017486 | orchestrator | 2c74882bb5fb registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_sb_db 2026-04-05 07:49:59.017496 | orchestrator | 1b3608bbbb83 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-05 07:49:59.017505 | orchestrator | 094f53f27ca2 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-05 07:49:59.017513 | orchestrator | 968f2656a7d6 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-05 07:49:59.017522 | orchestrator | c11df2853113 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-05 07:49:59.017531 | orchestrator | 809c3af2e189 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-05 07:49:59.017540 | 
orchestrator | 65dd81cd6665 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-05 07:49:59.017549 | orchestrator | 9a11358923d0 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-05 07:49:59.017557 | orchestrator | b7d7000e0fd3 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-05 07:49:59.017566 | orchestrator | 0a0bde495b0d registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-05 07:49:59.017575 | orchestrator | 6c3f3586e9e0 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-05 07:49:59.017589 | orchestrator | 999c80bd5e11 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-05 07:49:59.017599 | orchestrator | 72cc08590ad0 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-05 07:49:59.017639 | orchestrator | 9beb23b59a44 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-05 07:49:59.017650 | orchestrator | 1a447de65fd2 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-05 07:49:59.017659 | orchestrator | b7874f640c3d registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-05 07:49:59.017667 | orchestrator | 0a2ec02d2984 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 
"dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-05 07:49:59.017742 | orchestrator | e60cc6dee61d registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-05 07:49:59.167428 | orchestrator | 2026-04-05 07:49:59.167521 | orchestrator | ## Images @ testbed-node-2 2026-04-05 07:49:59.167536 | orchestrator | 2026-04-05 07:49:59.167548 | orchestrator | + echo 2026-04-05 07:49:59.167560 | orchestrator | + echo '## Images @ testbed-node-2' 2026-04-05 07:49:59.167572 | orchestrator | + echo 2026-04-05 07:49:59.167583 | orchestrator | + osism container testbed-node-2 images 2026-04-05 07:50:00.766787 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-05 07:50:00.766888 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 7 days ago 288MB 2026-04-05 07:50:00.766914 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 7 days ago 1.54GB 2026-04-05 07:50:00.766934 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 7 days ago 1.57GB 2026-04-05 07:50:00.766953 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 7 days ago 590MB 2026-04-05 07:50:00.766972 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 7 days ago 277MB 2026-04-05 07:50:00.766991 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 7 days ago 1.04GB 2026-04-05 07:50:00.767013 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 7 days ago 427MB 2026-04-05 07:50:00.767032 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 7 days ago 350MB 2026-04-05 07:50:00.767053 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 
28c0d33bbf93 7 days ago 683MB 2026-04-05 07:50:00.767065 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 7 days ago 277MB 2026-04-05 07:50:00.767076 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 7 days ago 285MB 2026-04-05 07:50:00.767087 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 7 days ago 293MB 2026-04-05 07:50:00.767098 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 7 days ago 293MB 2026-04-05 07:50:00.767109 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 7 days ago 284MB 2026-04-05 07:50:00.767120 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 7 days ago 284MB 2026-04-05 07:50:00.767131 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 7 days ago 1.2GB 2026-04-05 07:50:00.767142 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 7 days ago 463MB 2026-04-05 07:50:00.767153 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 7 days ago 309MB 2026-04-05 07:50:00.767164 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 7 days ago 368MB 2026-04-05 07:50:00.767175 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 7 days ago 303MB 2026-04-05 07:50:00.767309 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 7 days ago 312MB 2026-04-05 07:50:00.767342 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 7 days ago 317MB 2026-04-05 07:50:00.767356 | 
orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 7 days ago 301MB 2026-04-05 07:50:00.767371 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 7 days ago 301MB 2026-04-05 07:50:00.767383 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 7 days ago 301MB 2026-04-05 07:50:00.767396 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 7 days ago 301MB 2026-04-05 07:50:00.767409 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 7 days ago 1.09GB 2026-04-05 07:50:00.767423 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 7 days ago 1.06GB 2026-04-05 07:50:00.767444 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 7 days ago 1.05GB 2026-04-05 07:50:00.767492 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 7 days ago 997MB 2026-04-05 07:50:00.767507 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 7 days ago 996MB 2026-04-05 07:50:00.767520 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 7 days ago 1.07GB 2026-04-05 07:50:00.767534 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 7 days ago 1.07GB 2026-04-05 07:50:00.767548 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 7 days ago 1.05GB 2026-04-05 07:50:00.767561 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 7 days ago 1.05GB 2026-04-05 07:50:00.767574 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 
16.0.2.20260328 1e4a4601f94f 7 days ago 1.05GB 2026-04-05 07:50:00.767587 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 7 days ago 996MB 2026-04-05 07:50:00.767600 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 7 days ago 995MB 2026-04-05 07:50:00.767640 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 7 days ago 995MB 2026-04-05 07:50:00.767653 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 7 days ago 995MB 2026-04-05 07:50:00.767664 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 7 days ago 994MB 2026-04-05 07:50:00.767675 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 7 days ago 1.12GB 2026-04-05 07:50:00.767686 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 7 days ago 1.79GB 2026-04-05 07:50:00.767698 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 7 days ago 1.43GB 2026-04-05 07:50:00.767709 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 7 days ago 1.43GB 2026-04-05 07:50:00.767730 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 7 days ago 1.44GB 2026-04-05 07:50:00.767741 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 7 days ago 1.24GB 2026-04-05 07:50:00.767752 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 7 days ago 1.07GB 2026-04-05 07:50:00.767763 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 7 days ago 1.02GB 2026-04-05 07:50:00.767774 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 7 days ago 1GB 2026-04-05 07:50:00.767785 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 7 days ago 1GB 2026-04-05 07:50:00.767796 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 7 days ago 1GB 2026-04-05 07:50:00.767807 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 7 days ago 1.27GB 2026-04-05 07:50:00.767819 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 7 days ago 1.15GB 2026-04-05 07:50:00.767830 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 7 days ago 1.01GB 2026-04-05 07:50:00.767841 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 7 days ago 1GB 2026-04-05 07:50:00.767853 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 7 days ago 1GB 2026-04-05 07:50:00.767864 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 7 days ago 1.01GB 2026-04-05 07:50:00.767875 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 7 days ago 1GB 2026-04-05 07:50:00.767886 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 7 days ago 1GB 2026-04-05 07:50:00.767905 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 7 days ago 1.23GB 2026-04-05 07:50:00.767925 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 7 days ago 1.39GB 2026-04-05 07:50:00.767937 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 7 
days ago 1.23GB 2026-04-05 07:50:00.767948 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 7 days ago 1.23GB 2026-04-05 07:50:00.767959 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 7 days ago 1.07GB 2026-04-05 07:50:00.767970 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 7 days ago 1.07GB 2026-04-05 07:50:00.767980 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 7 days ago 1.07GB 2026-04-05 07:50:00.767992 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 7 days ago 1.24GB 2026-04-05 07:50:00.768003 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 7 days ago 301MB 2026-04-05 07:50:00.768013 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-05 07:50:00.768189 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-05 07:50:00.768205 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-05 07:50:00.768216 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-05 07:50:00.768227 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-05 07:50:00.768239 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-05 07:50:00.768250 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-05 07:50:00.768261 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-05 
07:50:00.768272 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-05 07:50:00.768283 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-05 07:50:00.768294 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-05 07:50:00.768305 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-05 07:50:00.768316 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-05 07:50:00.768333 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-05 07:50:00.768344 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-05 07:50:00.768355 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-05 07:50:00.768367 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-05 07:50:00.768377 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-05 07:50:00.768388 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-05 07:50:00.768399 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-05 07:50:00.768410 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-05 07:50:00.768421 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 
2026-04-05 07:50:00.768432 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-05 07:50:00.768443 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-05 07:50:00.768454 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-05 07:50:00.768465 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-05 07:50:00.768484 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-05 07:50:00.768495 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-05 07:50:00.768507 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-05 07:50:00.768518 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-05 07:50:00.768529 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-05 07:50:00.768546 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-05 07:50:00.768558 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-05 07:50:00.768569 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-05 07:50:00.768580 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-05 07:50:00.768591 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-05 07:50:00.768602 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-05 07:50:00.768713 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-05 07:50:00.768730 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-05 07:50:00.768742 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-05 07:50:00.768754 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-05 07:50:00.768765 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-05 07:50:00.768777 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-05 07:50:00.768788 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-05 07:50:00.768805 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-05 07:50:00.768817 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-05 07:50:00.768828 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-05 07:50:00.768840 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-05 07:50:00.768851 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-05 07:50:00.768863 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-05 07:50:00.768874 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-05 07:50:00.768894 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-05 07:50:00.768906 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-05 07:50:00.768917 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-05 07:50:00.768928 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-05 07:50:00.768940 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-05 07:50:00.768951 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-05 07:50:00.768963 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-05 07:50:00.768975 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-05 07:50:00.768986 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-05 07:50:00.768997 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-05 07:50:00.769009 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-05 07:50:00.769027 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-05 07:50:00.769040 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-05 07:50:00.769051 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-05 07:50:00.769063 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-05 07:50:00.769075 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-05 07:50:00.769089 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-05 07:50:00.769108 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-05 07:50:00.918607 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-05 07:50:00.927974 | orchestrator | + set -e
2026-04-05 07:50:00.928055 | orchestrator | + source /opt/manager-vars.sh
2026-04-05 07:50:00.928071 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-05 07:50:00.928083 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-05 07:50:00.928095 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-05 07:50:00.928106 | orchestrator | ++ CEPH_VERSION=reef
2026-04-05 07:50:00.928118 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-05 07:50:00.928131 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-05 07:50:00.928142 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-05 07:50:00.928154 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-05 07:50:00.928165 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-05 07:50:00.928176 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-05 07:50:00.928187 | orchestrator | ++ export ARA=false
2026-04-05 07:50:00.928199 | orchestrator | ++ ARA=false
2026-04-05 07:50:00.928210 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-05 07:50:00.928221 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-05 07:50:00.928232 | orchestrator | ++ export TEMPEST=false
2026-04-05 07:50:00.928244 | orchestrator | ++ TEMPEST=false
2026-04-05 07:50:00.928255 | orchestrator | ++ export IS_ZUUL=true
2026-04-05 07:50:00.928267 | orchestrator | ++ IS_ZUUL=true
2026-04-05 07:50:00.928305 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 07:50:00.928317 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238
2026-04-05 07:50:00.928329 | orchestrator | ++ export EXTERNAL_API=false
2026-04-05 07:50:00.928340 | orchestrator | ++ EXTERNAL_API=false
2026-04-05 07:50:00.928351 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-05 07:50:00.928362 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-05 07:50:00.928373 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-05 07:50:00.928384 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-05 07:50:00.928395 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-05 07:50:00.928407 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-05 07:50:00.928418 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-05 07:50:00.928429 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-05 07:50:00.928440 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-05 07:50:00.928452 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-05 07:50:00.937131 | orchestrator | + set -e
2026-04-05 07:50:00.937264 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 07:50:00.937282 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 07:50:00.937296 | orchestrator | ++ INTERACTIVE=false
2026-04-05 07:50:00.937307 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 07:50:00.937319 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 07:50:00.937437 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-05 07:50:00.938903 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-05 07:50:00.945419 | orchestrator |
2026-04-05 07:50:00.945480 | orchestrator | # Ceph status
2026-04-05 07:50:00.945494 | orchestrator |
2026-04-05 07:50:00.945506 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-05 07:50:00.945518 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-05 07:50:00.945530 | orchestrator | + echo
2026-04-05 07:50:00.945541 | orchestrator | + echo '# Ceph status'
2026-04-05 07:50:00.945552 | orchestrator | + echo
2026-04-05 07:50:00.945564 | orchestrator | + ceph -s
2026-04-05 07:50:01.575265 | orchestrator | cluster:
2026-04-05 07:50:01.575347 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-05 07:50:01.575358 | orchestrator | health: HEALTH_OK
2026-04-05 07:50:01.575367 | orchestrator |
2026-04-05 07:50:01.575374 | orchestrator | services:
2026-04-05 07:50:01.575381 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 2h)
2026-04-05 07:50:01.575398 | orchestrator | mgr: testbed-node-0(active, since 2h), standbys: testbed-node-2, testbed-node-1
2026-04-05 07:50:01.575406 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-05 07:50:01.575413 | orchestrator | osd: 6 osds: 6 up (since 108m), 6 in (since 4h)
2026-04-05 07:50:01.575420 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-05 07:50:01.575427 | orchestrator |
2026-04-05 07:50:01.575433 | orchestrator | data:
2026-04-05 07:50:01.575439 | orchestrator | volumes: 1/1 healthy
2026-04-05 07:50:01.575446 | orchestrator | pools: 14 pools, 401 pgs
2026-04-05 07:50:01.575453 | orchestrator | objects: 820 objects, 2.8 GiB
2026-04-05 07:50:01.575460 | orchestrator | usage: 8.0 GiB used, 112 GiB / 120 GiB avail
2026-04-05 07:50:01.575466 | orchestrator | pgs: 401 active+clean
2026-04-05 07:50:01.575473 | orchestrator |
2026-04-05 07:50:01.575479 | orchestrator | io:
2026-04-05 07:50:01.575485 | orchestrator | client: 850 B/s rd, 0 op/s rd, 0 op/s wr
2026-04-05 07:50:01.575492 | orchestrator |
2026-04-05 07:50:01.623009 | orchestrator |
2026-04-05 07:50:01.623089 | orchestrator | # Ceph versions
2026-04-05 07:50:01.623102 | orchestrator |
2026-04-05 07:50:01.623114 | orchestrator | + echo
2026-04-05 07:50:01.623125 | orchestrator | + echo '# Ceph versions'
2026-04-05 07:50:01.623137 | orchestrator | + echo
2026-04-05 07:50:01.623148 | orchestrator | + ceph versions
2026-04-05 07:50:02.234989 | orchestrator | {
2026-04-05 07:50:02.235089 | orchestrator | "mon": {
2026-04-05 07:50:02.235105 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-05 07:50:02.235118 | orchestrator | },
2026-04-05 07:50:02.235130 | orchestrator | "mgr": {
2026-04-05 07:50:02.235141 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-05 07:50:02.235152 | orchestrator | },
2026-04-05 07:50:02.235163 | orchestrator | "osd": {
2026-04-05 07:50:02.235174 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-05 07:50:02.235185 | orchestrator | },
2026-04-05 07:50:02.235196 | orchestrator | "mds": {
2026-04-05 07:50:02.235207 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-05 07:50:02.235244 | orchestrator | },
2026-04-05 07:50:02.235255 | orchestrator | "rgw": {
2026-04-05 07:50:02.235267 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-05 07:50:02.235278 | orchestrator | },
2026-04-05 07:50:02.235289 | orchestrator | "overall": {
2026-04-05 07:50:02.235301 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-05 07:50:02.235312 | orchestrator | }
2026-04-05 07:50:02.235323 | orchestrator | }
2026-04-05 07:50:02.293581 | orchestrator |
2026-04-05 07:50:02.293770 | orchestrator | # Ceph OSD tree
2026-04-05 07:50:02.293800 | orchestrator |
2026-04-05 07:50:02.293820 | orchestrator | + echo
2026-04-05 07:50:02.293841 | orchestrator | + echo '# Ceph OSD tree'
2026-04-05 07:50:02.294744 | orchestrator | + echo
2026-04-05 07:50:02.294784 | orchestrator | + ceph osd df tree
2026-04-05 07:50:02.804839 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-05 07:50:02.804947 | orchestrator | -1 0.11691 - 120 GiB 8.0 GiB 7.6 GiB 45 KiB 330 MiB 112 GiB 6.63 1.00 - root default
2026-04-05 07:50:02.804965 | orchestrator | -7 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 109 MiB 37 GiB 6.63 1.00 - host testbed-node-3
2026-04-05 07:50:02.805033 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 7 KiB 50 MiB 19 GiB 6.36 0.96 192 up osd.1
2026-04-05 07:50:02.805049 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 8 KiB 59 MiB 19 GiB 6.90 1.04 196 up osd.4
2026-04-05 07:50:02.805063 | orchestrator | -5 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 15 KiB 112 MiB 37 GiB 6.64 1.00 - host testbed-node-4
2026-04-05 07:50:02.805075 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 8 KiB 58 MiB 19 GiB 5.75 0.87 174 up osd.0
2026-04-05 07:50:02.805087 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 7 KiB 54 MiB 18 GiB 7.53 1.14 218 up osd.3
2026-04-05 07:50:02.805099 | orchestrator | -3 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 108 MiB 37 GiB 6.63 1.00 - host testbed-node-5
2026-04-05 07:50:02.805111 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 8 KiB 50 MiB 19 GiB 6.67 1.00 195 up osd.2
2026-04-05 07:50:02.805123 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 7 KiB 58 MiB 19 GiB 6.59 0.99 195 up osd.5
2026-04-05 07:50:02.805136 | orchestrator | TOTAL 120 GiB 8.0 GiB 7.6 GiB 48 KiB 330 MiB 112 GiB 6.63
2026-04-05 07:50:02.805149 | orchestrator | MIN/MAX VAR: 0.87/1.14 STDDEV: 0.54
2026-04-05 07:50:02.848840 | orchestrator |
2026-04-05 07:50:02.848933 | orchestrator | # Ceph monitor status
2026-04-05 07:50:02.848948 | orchestrator |
2026-04-05 07:50:02.848961 | orchestrator | + echo
2026-04-05 07:50:02.848973 | orchestrator | + echo '# Ceph monitor status'
2026-04-05 07:50:02.848984 | orchestrator | + echo
2026-04-05 07:50:02.848995 | orchestrator | + ceph mon stat
2026-04-05 07:50:03.443906 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 30, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-05 07:50:03.490741 | orchestrator |
2026-04-05 07:50:03.490812 | orchestrator | # Ceph quorum status
2026-04-05 07:50:03.490822 | orchestrator |
2026-04-05 07:50:03.490829 | orchestrator | + echo
2026-04-05 07:50:03.490836 | orchestrator | + echo '# Ceph quorum status'
2026-04-05 07:50:03.490843 | orchestrator | + echo
2026-04-05 07:50:03.491154 | orchestrator | + ceph quorum_status
2026-04-05 07:50:03.491720 | orchestrator | + jq
2026-04-05 07:50:04.122739 | orchestrator | {
2026-04-05 07:50:04.122842 | orchestrator | "election_epoch": 30,
2026-04-05 07:50:04.122859 | orchestrator | "quorum": [
2026-04-05 07:50:04.122872 | orchestrator | 0,
2026-04-05 07:50:04.122883 | orchestrator | 1,
2026-04-05 07:50:04.122894 | orchestrator | 2
2026-04-05 07:50:04.122904 | orchestrator | ],
2026-04-05 07:50:04.122915 | orchestrator | "quorum_names": [
2026-04-05 07:50:04.122927 | orchestrator | "testbed-node-0",
2026-04-05 07:50:04.122964 | orchestrator | "testbed-node-1",
2026-04-05 07:50:04.122976 | orchestrator | "testbed-node-2"
2026-04-05 07:50:04.122987 | orchestrator | ],
2026-04-05 07:50:04.122998 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-04-05 07:50:04.123010 | orchestrator | "quorum_age": 8291,
2026-04-05 07:50:04.123021 | orchestrator | "features": {
2026-04-05 07:50:04.123032 | orchestrator | "quorum_con": "4540138322906710015",
2026-04-05 07:50:04.123043 | orchestrator | "quorum_mon": [
2026-04-05 07:50:04.123054 | orchestrator | "kraken", 2026-04-05 07:50:04.123065 | orchestrator | "luminous", 2026-04-05 07:50:04.123076 | orchestrator | "mimic", 2026-04-05 07:50:04.123093 | orchestrator | "osdmap-prune", 2026-04-05 07:50:04.123112 | orchestrator | "nautilus", 2026-04-05 07:50:04.123140 | orchestrator | "octopus", 2026-04-05 07:50:04.123161 | orchestrator | "pacific", 2026-04-05 07:50:04.123178 | orchestrator | "elector-pinging", 2026-04-05 07:50:04.123197 | orchestrator | "quincy", 2026-04-05 07:50:04.123214 | orchestrator | "reef" 2026-04-05 07:50:04.123233 | orchestrator | ] 2026-04-05 07:50:04.123249 | orchestrator | }, 2026-04-05 07:50:04.123266 | orchestrator | "monmap": { 2026-04-05 07:50:04.123285 | orchestrator | "epoch": 1, 2026-04-05 07:50:04.123303 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-05 07:50:04.123323 | orchestrator | "modified": "2026-04-05T03:03:05.197063Z", 2026-04-05 07:50:04.123341 | orchestrator | "created": "2026-04-05T03:03:05.197063Z", 2026-04-05 07:50:04.123361 | orchestrator | "min_mon_release": 18, 2026-04-05 07:50:04.123379 | orchestrator | "min_mon_release_name": "reef", 2026-04-05 07:50:04.123398 | orchestrator | "election_strategy": 1, 2026-04-05 07:50:04.123418 | orchestrator | "disallowed_leaders: ": "", 2026-04-05 07:50:04.123435 | orchestrator | "stretch_mode": false, 2026-04-05 07:50:04.123454 | orchestrator | "tiebreaker_mon": "", 2026-04-05 07:50:04.123473 | orchestrator | "removed_ranks: ": "", 2026-04-05 07:50:04.123492 | orchestrator | "features": { 2026-04-05 07:50:04.123511 | orchestrator | "persistent": [ 2026-04-05 07:50:04.123528 | orchestrator | "kraken", 2026-04-05 07:50:04.123539 | orchestrator | "luminous", 2026-04-05 07:50:04.123550 | orchestrator | "mimic", 2026-04-05 07:50:04.123561 | orchestrator | "osdmap-prune", 2026-04-05 07:50:04.123571 | orchestrator | "nautilus", 2026-04-05 07:50:04.123582 | orchestrator | "octopus", 2026-04-05 07:50:04.123593 | 
orchestrator | "pacific",
2026-04-05 07:50:04.123604 | orchestrator | "elector-pinging",
2026-04-05 07:50:04.123643 | orchestrator | "quincy",
2026-04-05 07:50:04.123655 | orchestrator | "reef"
2026-04-05 07:50:04.123666 | orchestrator | ],
2026-04-05 07:50:04.123677 | orchestrator | "optional": []
2026-04-05 07:50:04.123688 | orchestrator | },
2026-04-05 07:50:04.123700 | orchestrator | "mons": [
2026-04-05 07:50:04.123711 | orchestrator | {
2026-04-05 07:50:04.123721 | orchestrator | "rank": 0,
2026-04-05 07:50:04.123732 | orchestrator | "name": "testbed-node-0",
2026-04-05 07:50:04.123743 | orchestrator | "public_addrs": {
2026-04-05 07:50:04.123755 | orchestrator | "addrvec": [
2026-04-05 07:50:04.123765 | orchestrator | {
2026-04-05 07:50:04.123776 | orchestrator | "type": "v2",
2026-04-05 07:50:04.123787 | orchestrator | "addr": "192.168.16.10:3300",
2026-04-05 07:50:04.123798 | orchestrator | "nonce": 0
2026-04-05 07:50:04.123809 | orchestrator | },
2026-04-05 07:50:04.123820 | orchestrator | {
2026-04-05 07:50:04.123830 | orchestrator | "type": "v1",
2026-04-05 07:50:04.123841 | orchestrator | "addr": "192.168.16.10:6789",
2026-04-05 07:50:04.123852 | orchestrator | "nonce": 0
2026-04-05 07:50:04.123863 | orchestrator | }
2026-04-05 07:50:04.123874 | orchestrator | ]
2026-04-05 07:50:04.123885 | orchestrator | },
2026-04-05 07:50:04.123896 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-04-05 07:50:04.123906 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-04-05 07:50:04.123918 | orchestrator | "priority": 0,
2026-04-05 07:50:04.123929 | orchestrator | "weight": 0,
2026-04-05 07:50:04.123940 | orchestrator | "crush_location": "{}"
2026-04-05 07:50:04.123950 | orchestrator | },
2026-04-05 07:50:04.123961 | orchestrator | {
2026-04-05 07:50:04.123972 | orchestrator | "rank": 1,
2026-04-05 07:50:04.123983 | orchestrator | "name": "testbed-node-1",
2026-04-05 07:50:04.123994 | orchestrator | "public_addrs": {
2026-04-05 07:50:04.124005 | orchestrator | "addrvec": [
2026-04-05 07:50:04.124016 | orchestrator | {
2026-04-05 07:50:04.124027 | orchestrator | "type": "v2",
2026-04-05 07:50:04.124039 | orchestrator | "addr": "192.168.16.11:3300",
2026-04-05 07:50:04.124067 | orchestrator | "nonce": 0
2026-04-05 07:50:04.124079 | orchestrator | },
2026-04-05 07:50:04.124090 | orchestrator | {
2026-04-05 07:50:04.124101 | orchestrator | "type": "v1",
2026-04-05 07:50:04.124112 | orchestrator | "addr": "192.168.16.11:6789",
2026-04-05 07:50:04.124123 | orchestrator | "nonce": 0
2026-04-05 07:50:04.124134 | orchestrator | }
2026-04-05 07:50:04.124145 | orchestrator | ]
2026-04-05 07:50:04.124156 | orchestrator | },
2026-04-05 07:50:04.124167 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-04-05 07:50:04.124178 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-04-05 07:50:04.124189 | orchestrator | "priority": 0,
2026-04-05 07:50:04.124200 | orchestrator | "weight": 0,
2026-04-05 07:50:04.124211 | orchestrator | "crush_location": "{}"
2026-04-05 07:50:04.124228 | orchestrator | },
2026-04-05 07:50:04.124252 | orchestrator | {
2026-04-05 07:50:04.124277 | orchestrator | "rank": 2,
2026-04-05 07:50:04.124295 | orchestrator | "name": "testbed-node-2",
2026-04-05 07:50:04.124312 | orchestrator | "public_addrs": {
2026-04-05 07:50:04.124330 | orchestrator | "addrvec": [
2026-04-05 07:50:04.124347 | orchestrator | {
2026-04-05 07:50:04.124365 | orchestrator | "type": "v2",
2026-04-05 07:50:04.124383 | orchestrator | "addr": "192.168.16.12:3300",
2026-04-05 07:50:04.124403 | orchestrator | "nonce": 0
2026-04-05 07:50:04.124421 | orchestrator | },
2026-04-05 07:50:04.124440 | orchestrator | {
2026-04-05 07:50:04.124459 | orchestrator | "type": "v1",
2026-04-05 07:50:04.124477 | orchestrator | "addr": "192.168.16.12:6789",
2026-04-05 07:50:04.124494 | orchestrator | "nonce": 0
2026-04-05 07:50:04.124513 | orchestrator | }
2026-04-05 07:50:04.124531 | orchestrator | ]
2026-04-05 07:50:04.124551 | orchestrator | },
2026-04-05 07:50:04.124567 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-04-05 07:50:04.124582 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-04-05 07:50:04.124601 | orchestrator | "priority": 0,
2026-04-05 07:50:04.124704 | orchestrator | "weight": 0,
2026-04-05 07:50:04.124725 | orchestrator | "crush_location": "{}"
2026-04-05 07:50:04.124745 | orchestrator | }
2026-04-05 07:50:04.124764 | orchestrator | ]
2026-04-05 07:50:04.124782 | orchestrator | }
2026-04-05 07:50:04.124800 | orchestrator | }
2026-04-05 07:50:04.124818 | orchestrator |
2026-04-05 07:50:04.124838 | orchestrator | # Ceph free space status
2026-04-05 07:50:04.124857 | orchestrator |
2026-04-05 07:50:04.124875 | orchestrator | + echo
2026-04-05 07:50:04.124894 | orchestrator | + echo '# Ceph free space status'
2026-04-05 07:50:04.124914 | orchestrator | + echo
2026-04-05 07:50:04.124932 | orchestrator | + ceph df
2026-04-05 07:50:04.784676 | orchestrator | --- RAW STORAGE ---
2026-04-05 07:50:04.784770 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-04-05 07:50:04.784794 | orchestrator | hdd    120 GiB  112 GiB  8.0 GiB   8.0 GiB       6.63
2026-04-05 07:50:04.784804 | orchestrator | TOTAL  120 GiB  112 GiB  8.0 GiB   8.0 GiB       6.63
2026-04-05 07:50:04.784813 | orchestrator |
2026-04-05 07:50:04.784823 | orchestrator | --- POOLS ---
2026-04-05 07:50:04.784832 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-04-05 07:50:04.784842 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     52 GiB
2026-04-05 07:50:04.784851 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-04-05 07:50:04.784860 | orchestrator | cephfs_metadata             3   16  9.5 KiB       22  113 KiB      0     35 GiB
2026-04-05 07:50:04.784869 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-04-05 07:50:04.784878 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-04-05 07:50:04.784886 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-04-05 07:50:04.784895 | orchestrator | default.rgw.log             7   32  3.6 KiB      209  408 KiB      0     35 GiB
2026-04-05 07:50:04.784904 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-04-05 07:50:04.784923 | orchestrator | .rgw.root                   9   32  3.5 KiB        7   56 KiB      0     52 GiB
2026-04-05 07:50:04.784933 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-04-05 07:50:04.784942 | orchestrator | volumes                    11   32  325 MiB      267  974 MiB   0.90     35 GiB
2026-04-05 07:50:04.784968 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.97     35 GiB
2026-04-05 07:50:04.784978 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-04-05 07:50:04.784986 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-04-05 07:50:04.836909 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-05 07:50:04.899059 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-05 07:50:04.899149 | orchestrator | + osism apply facts
2026-04-05 07:50:06.245143 | orchestrator | 2026-04-05 07:50:06 | INFO  | Prepare task for execution of facts.
2026-04-05 07:50:06.306693 | orchestrator | 2026-04-05 07:50:06 | INFO  | Task ca822bde-5ee7-47f3-b3eb-464ae5dbd530 (facts) was prepared for execution.
2026-04-05 07:50:06.306770 | orchestrator | 2026-04-05 07:50:06 | INFO  | It takes a moment until task ca822bde-5ee7-47f3-b3eb-464ae5dbd530 (facts) has been started and output is visible here.
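The script traced above gates the upgrade path on a version comparison: `semver 10.0.0 5.0.0` prints `1`, so the `[[ 1 -eq -1 ]]` branch (taken only when the first version is strictly older) is skipped. A minimal sketch of such a three-way comparison follows; the `-1`/`0`/`1` contract is only inferred from the `-eq -1` guard in the log, and `semver_cmp` is a hypothetical name, not the actual helper used by the testbed scripts:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted version strings numerically, part by part.

    Returns -1 if a < b, 0 if a == b, 1 if a > b (the contract the
    shell guard in the log appears to rely on -- an assumption).
    """
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Pad the shorter version with zeros so "10.0" compares equal to "10.0.0".
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    # Python compares lists lexicographically; this yields -1, 0 or 1.
    return (pa > pb) - (pa < pb)

if __name__ == "__main__":
    # Mirrors the log: 10.0.0 is newer than 5.0.0, so the helper prints 1
    # and the "strictly older" branch is not taken.
    print(semver_cmp("10.0.0", "5.0.0"))
```

Comparing parts numerically rather than as strings matters here: a plain string comparison would sort `"10.0.0"` before `"5.0.0"`.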
2026-04-05 07:50:28.331587 | orchestrator | 2026-04-05 07:50:28.331751 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-05 07:50:28.331767 | orchestrator | 2026-04-05 07:50:28.331780 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-05 07:50:28.331793 | orchestrator | Sunday 05 April 2026 07:50:11 +0000 (0:00:01.868) 0:00:01.868 ********** 2026-04-05 07:50:28.331804 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:50:28.331817 | orchestrator | ok: [testbed-manager] 2026-04-05 07:50:28.331828 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:50:28.331839 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:50:28.331850 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:50:28.331861 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:50:28.331872 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:50:28.331883 | orchestrator | 2026-04-05 07:50:28.331894 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-05 07:50:28.331906 | orchestrator | Sunday 05 April 2026 07:50:14 +0000 (0:00:03.259) 0:00:05.127 ********** 2026-04-05 07:50:28.331917 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:50:28.331929 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:50:28.331939 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:50:28.331950 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:50:28.331961 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:50:28.331972 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:50:28.331983 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:50:28.331994 | orchestrator | 2026-04-05 07:50:28.332005 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-05 07:50:28.332016 | orchestrator | 2026-04-05 07:50:28.332027 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-05 07:50:28.332038 | orchestrator | Sunday 05 April 2026 07:50:17 +0000 (0:00:03.003) 0:00:08.131 ********** 2026-04-05 07:50:28.332049 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:50:28.332060 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:50:28.332071 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:50:28.332082 | orchestrator | ok: [testbed-manager] 2026-04-05 07:50:28.332093 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:50:28.332104 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:50:28.332115 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:50:28.332126 | orchestrator | 2026-04-05 07:50:28.332139 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-05 07:50:28.332152 | orchestrator | 2026-04-05 07:50:28.332166 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-05 07:50:28.332179 | orchestrator | Sunday 05 April 2026 07:50:24 +0000 (0:00:07.279) 0:00:15.410 ********** 2026-04-05 07:50:28.332191 | orchestrator | skipping: [testbed-manager] 2026-04-05 07:50:28.332204 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:50:28.332217 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:50:28.332229 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:50:28.332242 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:50:28.332254 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:50:28.332293 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:50:28.332307 | orchestrator | 2026-04-05 07:50:28.332319 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:50:28.332333 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332347 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-05 07:50:28.332359 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332373 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332385 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332397 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332410 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:50:28.332423 | orchestrator | 2026-04-05 07:50:28.332436 | orchestrator | 2026-04-05 07:50:28.332450 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:50:28.332463 | orchestrator | Sunday 05 April 2026 07:50:27 +0000 (0:00:03.155) 0:00:18.566 ********** 2026-04-05 07:50:28.332491 | orchestrator | =============================================================================== 2026-04-05 07:50:28.332502 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.28s 2026-04-05 07:50:28.332513 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.26s 2026-04-05 07:50:28.332524 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.16s 2026-04-05 07:50:28.332535 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 3.00s 2026-04-05 07:50:28.517366 | orchestrator | + osism validate ceph-mons 2026-04-05 07:51:38.765275 | orchestrator | 2026-04-05 07:51:38.765403 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-05 07:51:38.765424 | orchestrator | 2026-04-05 07:51:38.765437 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-05 07:51:38.765451 | orchestrator | Sunday 05 April 2026 07:50:45 +0000 (0:00:01.692) 0:00:01.692 ********** 2026-04-05 07:51:38.765465 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.765479 | orchestrator | 2026-04-05 07:51:38.765494 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 07:51:38.765510 | orchestrator | Sunday 05 April 2026 07:50:47 +0000 (0:00:02.717) 0:00:04.410 ********** 2026-04-05 07:51:38.765523 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.765537 | orchestrator | 2026-04-05 07:51:38.765550 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 07:51:38.765564 | orchestrator | Sunday 05 April 2026 07:50:49 +0000 (0:00:01.680) 0:00:06.091 ********** 2026-04-05 07:51:38.765624 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.765639 | orchestrator | 2026-04-05 07:51:38.765652 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 07:51:38.765664 | orchestrator | Sunday 05 April 2026 07:50:50 +0000 (0:00:01.174) 0:00:07.265 ********** 2026-04-05 07:51:38.765676 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.765690 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:51:38.765702 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:51:38.765716 | orchestrator | 2026-04-05 07:51:38.765729 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-05 07:51:38.765771 | orchestrator | Sunday 05 April 2026 07:50:52 +0000 (0:00:01.742) 0:00:09.007 ********** 2026-04-05 07:51:38.765786 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.765799 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:51:38.765813 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:51:38.765826 | orchestrator | 
2026-04-05 07:51:38.765839 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-05 07:51:38.765853 | orchestrator | Sunday 05 April 2026 07:50:55 +0000 (0:00:02.567) 0:00:11.575 ********** 2026-04-05 07:51:38.765866 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.765880 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:51:38.765952 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:51:38.765968 | orchestrator | 2026-04-05 07:51:38.765982 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-05 07:51:38.765996 | orchestrator | Sunday 05 April 2026 07:50:56 +0000 (0:00:01.481) 0:00:13.057 ********** 2026-04-05 07:51:38.766008 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.766083 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:51:38.766099 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:51:38.766114 | orchestrator | 2026-04-05 07:51:38.766129 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 07:51:38.766153 | orchestrator | Sunday 05 April 2026 07:50:57 +0000 (0:00:01.412) 0:00:14.469 ********** 2026-04-05 07:51:38.766166 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.766180 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:51:38.766193 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:51:38.766206 | orchestrator | 2026-04-05 07:51:38.766219 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-05 07:51:38.766232 | orchestrator | Sunday 05 April 2026 07:50:59 +0000 (0:00:01.474) 0:00:15.944 ********** 2026-04-05 07:51:38.766246 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766259 | orchestrator | skipping: [testbed-node-1] 2026-04-05 07:51:38.766272 | orchestrator | skipping: [testbed-node-2] 2026-04-05 07:51:38.766285 | orchestrator | 2026-04-05 07:51:38.766298 | 
orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-05 07:51:38.766312 | orchestrator | Sunday 05 April 2026 07:51:00 +0000 (0:00:01.494) 0:00:17.439 ********** 2026-04-05 07:51:38.766325 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.766338 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:51:38.766352 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:51:38.766365 | orchestrator | 2026-04-05 07:51:38.766378 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 07:51:38.766392 | orchestrator | Sunday 05 April 2026 07:51:02 +0000 (0:00:01.415) 0:00:18.855 ********** 2026-04-05 07:51:38.766405 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766418 | orchestrator | 2026-04-05 07:51:38.766431 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 07:51:38.766445 | orchestrator | Sunday 05 April 2026 07:51:03 +0000 (0:00:01.297) 0:00:20.152 ********** 2026-04-05 07:51:38.766459 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766472 | orchestrator | 2026-04-05 07:51:38.766486 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 07:51:38.766500 | orchestrator | Sunday 05 April 2026 07:51:04 +0000 (0:00:01.291) 0:00:21.444 ********** 2026-04-05 07:51:38.766513 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766526 | orchestrator | 2026-04-05 07:51:38.766539 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:38.766552 | orchestrator | Sunday 05 April 2026 07:51:06 +0000 (0:00:01.305) 0:00:22.750 ********** 2026-04-05 07:51:38.766565 | orchestrator | 2026-04-05 07:51:38.766626 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:38.766641 | orchestrator | Sunday 05 April 2026 
07:51:06 +0000 (0:00:00.469) 0:00:23.220 ********** 2026-04-05 07:51:38.766655 | orchestrator | 2026-04-05 07:51:38.766669 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:38.766683 | orchestrator | Sunday 05 April 2026 07:51:07 +0000 (0:00:00.649) 0:00:23.869 ********** 2026-04-05 07:51:38.766711 | orchestrator | 2026-04-05 07:51:38.766725 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 07:51:38.766739 | orchestrator | Sunday 05 April 2026 07:51:08 +0000 (0:00:00.825) 0:00:24.695 ********** 2026-04-05 07:51:38.766752 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766765 | orchestrator | 2026-04-05 07:51:38.766779 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-05 07:51:38.766792 | orchestrator | Sunday 05 April 2026 07:51:09 +0000 (0:00:01.235) 0:00:25.931 ********** 2026-04-05 07:51:38.766806 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.766819 | orchestrator | 2026-04-05 07:51:38.766857 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-05 07:51:38.766872 | orchestrator | Sunday 05 April 2026 07:51:10 +0000 (0:00:01.325) 0:00:27.257 ********** 2026-04-05 07:51:38.766885 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.766899 | orchestrator | 2026-04-05 07:51:38.766912 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-05 07:51:38.766926 | orchestrator | Sunday 05 April 2026 07:51:11 +0000 (0:00:01.095) 0:00:28.352 ********** 2026-04-05 07:51:38.766939 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:51:38.766952 | orchestrator | 2026-04-05 07:51:38.766966 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-05 07:51:38.766979 | orchestrator | Sunday 05 April 2026 
07:51:14 +0000 (0:00:02.874) 0:00:31.227 ********** 2026-04-05 07:51:38.766993 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767006 | orchestrator | 2026-04-05 07:51:38.767019 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-05 07:51:38.767033 | orchestrator | Sunday 05 April 2026 07:51:16 +0000 (0:00:01.374) 0:00:32.601 ********** 2026-04-05 07:51:38.767047 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.767060 | orchestrator | 2026-04-05 07:51:38.767074 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-05 07:51:38.767087 | orchestrator | Sunday 05 April 2026 07:51:17 +0000 (0:00:01.119) 0:00:33.721 ********** 2026-04-05 07:51:38.767100 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767113 | orchestrator | 2026-04-05 07:51:38.767127 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-05 07:51:38.767140 | orchestrator | Sunday 05 April 2026 07:51:18 +0000 (0:00:01.336) 0:00:35.058 ********** 2026-04-05 07:51:38.767154 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767167 | orchestrator | 2026-04-05 07:51:38.767198 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-05 07:51:38.767212 | orchestrator | Sunday 05 April 2026 07:51:19 +0000 (0:00:01.287) 0:00:36.345 ********** 2026-04-05 07:51:38.767225 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.767238 | orchestrator | 2026-04-05 07:51:38.767252 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-05 07:51:38.767265 | orchestrator | Sunday 05 April 2026 07:51:20 +0000 (0:00:01.152) 0:00:37.498 ********** 2026-04-05 07:51:38.767278 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767292 | orchestrator | 2026-04-05 07:51:38.767305 | orchestrator | TASK [Prepare status test 
vars] ************************************************ 2026-04-05 07:51:38.767318 | orchestrator | Sunday 05 April 2026 07:51:22 +0000 (0:00:01.103) 0:00:38.601 ********** 2026-04-05 07:51:38.767332 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767345 | orchestrator | 2026-04-05 07:51:38.767358 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-05 07:51:38.767371 | orchestrator | Sunday 05 April 2026 07:51:23 +0000 (0:00:01.134) 0:00:39.736 ********** 2026-04-05 07:51:38.767385 | orchestrator | changed: [testbed-node-0] 2026-04-05 07:51:38.767398 | orchestrator | 2026-04-05 07:51:38.767411 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-05 07:51:38.767425 | orchestrator | Sunday 05 April 2026 07:51:25 +0000 (0:00:02.305) 0:00:42.041 ********** 2026-04-05 07:51:38.767447 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767458 | orchestrator | 2026-04-05 07:51:38.767472 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-05 07:51:38.767485 | orchestrator | Sunday 05 April 2026 07:51:26 +0000 (0:00:01.315) 0:00:43.356 ********** 2026-04-05 07:51:38.767498 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.767512 | orchestrator | 2026-04-05 07:51:38.767526 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-05 07:51:38.767540 | orchestrator | Sunday 05 April 2026 07:51:27 +0000 (0:00:01.123) 0:00:44.479 ********** 2026-04-05 07:51:38.767552 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:51:38.767567 | orchestrator | 2026-04-05 07:51:38.767605 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-05 07:51:38.767615 | orchestrator | Sunday 05 April 2026 07:51:29 +0000 (0:00:01.146) 0:00:45.625 ********** 2026-04-05 07:51:38.767622 | orchestrator | skipping: 
[testbed-node-0] 2026-04-05 07:51:38.767630 | orchestrator | 2026-04-05 07:51:38.767638 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-05 07:51:38.767646 | orchestrator | Sunday 05 April 2026 07:51:30 +0000 (0:00:01.177) 0:00:46.803 ********** 2026-04-05 07:51:38.767654 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.767662 | orchestrator | 2026-04-05 07:51:38.767670 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 07:51:38.767678 | orchestrator | Sunday 05 April 2026 07:51:31 +0000 (0:00:01.125) 0:00:47.928 ********** 2026-04-05 07:51:38.767686 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.767694 | orchestrator | 2026-04-05 07:51:38.767702 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 07:51:38.767710 | orchestrator | Sunday 05 April 2026 07:51:32 +0000 (0:00:01.263) 0:00:49.192 ********** 2026-04-05 07:51:38.767718 | orchestrator | skipping: [testbed-node-0] 2026-04-05 07:51:38.767726 | orchestrator | 2026-04-05 07:51:38.767733 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 07:51:38.767741 | orchestrator | Sunday 05 April 2026 07:51:33 +0000 (0:00:01.282) 0:00:50.474 ********** 2026-04-05 07:51:38.767749 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.767757 | orchestrator | 2026-04-05 07:51:38.767770 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 07:51:38.767779 | orchestrator | Sunday 05 April 2026 07:51:36 +0000 (0:00:02.937) 0:00:53.411 ********** 2026-04-05 07:51:38.767786 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.767794 | orchestrator | 2026-04-05 07:51:38.767802 | orchestrator | TASK [Aggregate test results 
step three] *************************************** 2026-04-05 07:51:38.767810 | orchestrator | Sunday 05 April 2026 07:51:38 +0000 (0:00:01.529) 0:00:54.941 ********** 2026-04-05 07:51:38.767818 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:38.767826 | orchestrator | 2026-04-05 07:51:38.767842 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:46.028701 | orchestrator | Sunday 05 April 2026 07:51:39 +0000 (0:00:01.307) 0:00:56.248 ********** 2026-04-05 07:51:46.028811 | orchestrator | 2026-04-05 07:51:46.028828 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:46.028841 | orchestrator | Sunday 05 April 2026 07:51:40 +0000 (0:00:00.493) 0:00:56.742 ********** 2026-04-05 07:51:46.028852 | orchestrator | 2026-04-05 07:51:46.028864 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:51:46.028875 | orchestrator | Sunday 05 April 2026 07:51:40 +0000 (0:00:00.448) 0:00:57.191 ********** 2026-04-05 07:51:46.028886 | orchestrator | 2026-04-05 07:51:46.028897 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 07:51:46.028908 | orchestrator | Sunday 05 April 2026 07:51:41 +0000 (0:00:00.835) 0:00:58.026 ********** 2026-04-05 07:51:46.028920 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:51:46.028956 | orchestrator | 2026-04-05 07:51:46.028968 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 07:51:46.028979 | orchestrator | Sunday 05 April 2026 07:51:43 +0000 (0:00:02.449) 0:01:00.475 ********** 2026-04-05 07:51:46.028990 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-05 07:51:46.029001 | orchestrator |  "msg": [ 2026-04-05 07:51:46.029013 | 
orchestrator |  "Validator run completed.", 2026-04-05 07:51:46.029025 | orchestrator |  "You can find the report file here:", 2026-04-05 07:51:46.029036 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-05T07:50:46+00:00-report.json", 2026-04-05 07:51:46.029048 | orchestrator |  "on the following host:", 2026-04-05 07:51:46.029059 | orchestrator |  "testbed-manager" 2026-04-05 07:51:46.029070 | orchestrator |  ] 2026-04-05 07:51:46.029082 | orchestrator | } 2026-04-05 07:51:46.029094 | orchestrator | 2026-04-05 07:51:46.029106 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:51:46.029119 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-05 07:51:46.029131 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:51:46.029143 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-05 07:51:46.029154 | orchestrator | 2026-04-05 07:51:46.029166 | orchestrator | 2026-04-05 07:51:46.029177 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:51:46.029188 | orchestrator | Sunday 05 April 2026 07:51:45 +0000 (0:00:01.704) 0:01:02.179 ********** 2026-04-05 07:51:46.029199 | orchestrator | =============================================================================== 2026-04-05 07:51:46.029210 | orchestrator | Aggregate test results step one ----------------------------------------- 2.94s 2026-04-05 07:51:46.029224 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.87s 2026-04-05 07:51:46.029237 | orchestrator | Get timestamp for report file ------------------------------------------- 2.72s 2026-04-05 07:51:46.029250 | orchestrator | Get container info ------------------------------------------------------ 2.57s 
2026-04-05 07:51:46.029263 | orchestrator | Write report file ------------------------------------------------------- 2.45s 2026-04-05 07:51:46.029276 | orchestrator | Gather status data ------------------------------------------------------ 2.31s 2026-04-05 07:51:46.029290 | orchestrator | Flush handlers ---------------------------------------------------------- 1.95s 2026-04-05 07:51:46.029302 | orchestrator | Flush handlers ---------------------------------------------------------- 1.78s 2026-04-05 07:51:46.029315 | orchestrator | Prepare test data for container existance test -------------------------- 1.74s 2026-04-05 07:51:46.029329 | orchestrator | Print report file information ------------------------------------------- 1.70s 2026-04-05 07:51:46.029341 | orchestrator | Create report output directory ------------------------------------------ 1.68s 2026-04-05 07:51:46.029354 | orchestrator | Aggregate test results step two ----------------------------------------- 1.53s 2026-04-05 07:51:46.029368 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.49s 2026-04-05 07:51:46.029381 | orchestrator | Set test result to failed if container is missing ----------------------- 1.48s 2026-04-05 07:51:46.029394 | orchestrator | Prepare test data ------------------------------------------------------- 1.48s 2026-04-05 07:51:46.029407 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.42s 2026-04-05 07:51:46.029420 | orchestrator | Set test result to passed if container is existing ---------------------- 1.41s 2026-04-05 07:51:46.029433 | orchestrator | Set quorum test data ---------------------------------------------------- 1.37s 2026-04-05 07:51:46.029446 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 1.34s 2026-04-05 07:51:46.029466 | orchestrator | Fail due to missing containers ------------------------------------------ 1.33s 2026-04-05 
07:51:46.223532 | orchestrator | + osism validate ceph-mgrs 2026-04-05 07:52:49.220669 | orchestrator | 2026-04-05 07:52:49.220783 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-05 07:52:49.220800 | orchestrator | 2026-04-05 07:52:49.220813 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-05 07:52:49.220825 | orchestrator | Sunday 05 April 2026 07:52:02 +0000 (0:00:01.820) 0:00:01.820 ********** 2026-04-05 07:52:49.220837 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:52:49.220848 | orchestrator | 2026-04-05 07:52:49.220860 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-05 07:52:49.220871 | orchestrator | Sunday 05 April 2026 07:52:05 +0000 (0:00:02.712) 0:00:04.533 ********** 2026-04-05 07:52:49.220882 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-05 07:52:49.220893 | orchestrator | 2026-04-05 07:52:49.220904 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-05 07:52:49.220915 | orchestrator | Sunday 05 April 2026 07:52:07 +0000 (0:00:01.750) 0:00:06.283 ********** 2026-04-05 07:52:49.220926 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:52:49.220944 | orchestrator | 2026-04-05 07:52:49.220962 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-05 07:52:49.220979 | orchestrator | Sunday 05 April 2026 07:52:08 +0000 (0:00:01.095) 0:00:07.379 ********** 2026-04-05 07:52:49.221010 | orchestrator | ok: [testbed-node-0] 2026-04-05 07:52:49.221030 | orchestrator | ok: [testbed-node-1] 2026-04-05 07:52:49.221047 | orchestrator | ok: [testbed-node-2] 2026-04-05 07:52:49.221064 | orchestrator | 2026-04-05 07:52:49.221081 | orchestrator | TASK [Get container info] ****************************************************** 
2026-04-05 07:52:49.221101 | orchestrator | Sunday 05 April 2026 07:52:10 +0000 (0:00:01.731) 0:00:09.111 **********
2026-04-05 07:52:49.221118 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:52:49.221134 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:52:49.221152 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.221172 | orchestrator |
2026-04-05 07:52:49.221190 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-05 07:52:49.221210 | orchestrator | Sunday 05 April 2026 07:52:12 +0000 (0:00:02.551) 0:00:11.663 **********
2026-04-05 07:52:49.221230 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.221251 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:52:49.221270 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:52:49.221290 | orchestrator |
2026-04-05 07:52:49.221309 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-05 07:52:49.221329 | orchestrator | Sunday 05 April 2026 07:52:14 +0000 (0:00:01.380) 0:00:13.043 **********
2026-04-05 07:52:49.221347 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.221365 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:52:49.221383 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:52:49.221402 | orchestrator |
2026-04-05 07:52:49.221420 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 07:52:49.221439 | orchestrator | Sunday 05 April 2026 07:52:15 +0000 (0:00:01.413) 0:00:14.457 **********
2026-04-05 07:52:49.221458 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.221477 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:52:49.221497 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:52:49.221514 | orchestrator |
2026-04-05 07:52:49.221532 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-05 07:52:49.221579 | orchestrator | Sunday 05 April 2026 07:52:16 +0000 (0:00:01.345) 0:00:15.803 **********
2026-04-05 07:52:49.221602 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.221620 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:52:49.221638 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:52:49.221657 | orchestrator |
2026-04-05 07:52:49.221675 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-05 07:52:49.221731 | orchestrator | Sunday 05 April 2026 07:52:18 +0000 (0:00:01.341) 0:00:17.144 **********
2026-04-05 07:52:49.221748 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.221760 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:52:49.221771 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:52:49.221782 | orchestrator |
2026-04-05 07:52:49.221793 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 07:52:49.221805 | orchestrator | Sunday 05 April 2026 07:52:19 +0000 (0:00:01.290) 0:00:18.435 **********
2026-04-05 07:52:49.221816 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.221827 | orchestrator |
2026-04-05 07:52:49.221838 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 07:52:49.221849 | orchestrator | Sunday 05 April 2026 07:52:20 +0000 (0:00:01.291) 0:00:19.727 **********
2026-04-05 07:52:49.221860 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.221871 | orchestrator |
2026-04-05 07:52:49.221882 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 07:52:49.221893 | orchestrator | Sunday 05 April 2026 07:52:22 +0000 (0:00:01.269) 0:00:20.996 **********
2026-04-05 07:52:49.221904 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.221915 | orchestrator |
2026-04-05 07:52:49.221926 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.221937 | orchestrator | Sunday 05 April 2026 07:52:23 +0000 (0:00:01.291) 0:00:22.288 **********
2026-04-05 07:52:49.221948 | orchestrator |
2026-04-05 07:52:49.221959 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.221969 | orchestrator | Sunday 05 April 2026 07:52:23 +0000 (0:00:00.428) 0:00:22.717 **********
2026-04-05 07:52:49.221980 | orchestrator |
2026-04-05 07:52:49.221991 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.222003 | orchestrator | Sunday 05 April 2026 07:52:24 +0000 (0:00:00.477) 0:00:23.194 **********
2026-04-05 07:52:49.222105 | orchestrator |
2026-04-05 07:52:49.222128 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 07:52:49.222158 | orchestrator | Sunday 05 April 2026 07:52:25 +0000 (0:00:00.981) 0:00:24.176 **********
2026-04-05 07:52:49.222170 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.222181 | orchestrator |
2026-04-05 07:52:49.222192 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-05 07:52:49.222202 | orchestrator | Sunday 05 April 2026 07:52:26 +0000 (0:00:01.259) 0:00:25.435 **********
2026-04-05 07:52:49.222213 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.222224 | orchestrator |
2026-04-05 07:52:49.222274 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-05 07:52:49.222287 | orchestrator | Sunday 05 April 2026 07:52:27 +0000 (0:00:01.235) 0:00:26.671 **********
2026-04-05 07:52:49.222298 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.222309 | orchestrator |
2026-04-05 07:52:49.222320 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-05 07:52:49.222331 | orchestrator | Sunday 05 April 2026 07:52:28 +0000 (0:00:01.161) 0:00:27.833 **********
2026-04-05 07:52:49.222342 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:52:49.222352 | orchestrator |
2026-04-05 07:52:49.222363 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-05 07:52:49.222374 | orchestrator | Sunday 05 April 2026 07:52:31 +0000 (0:00:02.967) 0:00:30.801 **********
2026-04-05 07:52:49.222385 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.222396 | orchestrator |
2026-04-05 07:52:49.222407 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-05 07:52:49.222418 | orchestrator | Sunday 05 April 2026 07:52:33 +0000 (0:00:01.285) 0:00:32.087 **********
2026-04-05 07:52:49.222429 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.222440 | orchestrator |
2026-04-05 07:52:49.222451 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-05 07:52:49.222461 | orchestrator | Sunday 05 April 2026 07:52:34 +0000 (0:00:01.299) 0:00:33.386 **********
2026-04-05 07:52:49.222483 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.222494 | orchestrator |
2026-04-05 07:52:49.222506 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-05 07:52:49.222517 | orchestrator | Sunday 05 April 2026 07:52:35 +0000 (0:00:01.183) 0:00:34.569 **********
2026-04-05 07:52:49.222528 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:52:49.222539 | orchestrator |
2026-04-05 07:52:49.222579 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-05 07:52:49.222599 | orchestrator | Sunday 05 April 2026 07:52:36 +0000 (0:00:01.156) 0:00:35.726 **********
2026-04-05 07:52:49.222623 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 07:52:49.222652 | orchestrator |
2026-04-05 07:52:49.222669 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-05 07:52:49.222685 | orchestrator | Sunday 05 April 2026 07:52:38 +0000 (0:00:01.492) 0:00:37.219 **********
2026-04-05 07:52:49.222702 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:52:49.222718 | orchestrator |
2026-04-05 07:52:49.222735 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-05 07:52:49.222752 | orchestrator | Sunday 05 April 2026 07:52:39 +0000 (0:00:01.451) 0:00:38.671 **********
2026-04-05 07:52:49.222769 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 07:52:49.222787 | orchestrator |
2026-04-05 07:52:49.222867 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-05 07:52:49.222879 | orchestrator | Sunday 05 April 2026 07:52:42 +0000 (0:00:02.409) 0:00:41.081 **********
2026-04-05 07:52:49.222890 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 07:52:49.222901 | orchestrator |
2026-04-05 07:52:49.222912 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-05 07:52:49.222923 | orchestrator | Sunday 05 April 2026 07:52:43 +0000 (0:00:01.283) 0:00:42.364 **********
2026-04-05 07:52:49.222933 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 07:52:49.222944 | orchestrator |
2026-04-05 07:52:49.222955 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.222966 | orchestrator | Sunday 05 April 2026 07:52:44 +0000 (0:00:01.299) 0:00:43.663 **********
2026-04-05 07:52:49.222977 | orchestrator |
2026-04-05 07:52:49.222987 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.222998 | orchestrator | Sunday 05 April 2026 07:52:45 +0000 (0:00:00.465) 0:00:44.128 **********
2026-04-05 07:52:49.223009 | orchestrator |
2026-04-05 07:52:49.223020 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-05 07:52:49.223031 | orchestrator | Sunday 05 April 2026 07:52:45 +0000 (0:00:00.424) 0:00:44.553 **********
2026-04-05 07:52:49.223041 | orchestrator |
2026-04-05 07:52:49.223053 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-05 07:52:49.223063 | orchestrator | Sunday 05 April 2026 07:52:46 +0000 (0:00:00.761) 0:00:45.314 **********
2026-04-05 07:52:49.223074 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-05 07:52:49.223085 | orchestrator |
2026-04-05 07:52:49.223095 | orchestrator | TASK [Print report file information] *******************************************
2026-04-05 07:52:49.223106 | orchestrator | Sunday 05 April 2026 07:52:48 +0000 (0:00:02.346) 0:00:47.661 **********
2026-04-05 07:52:49.223117 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-05 07:52:49.223128 | orchestrator |     "msg": [
2026-04-05 07:52:49.223140 | orchestrator |         "Validator run completed.",
2026-04-05 07:52:49.223151 | orchestrator |         "You can find the report file here:",
2026-04-05 07:52:49.223162 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-05T07:52:03+00:00-report.json",
2026-04-05 07:52:49.223174 | orchestrator |         "on the following host:",
2026-04-05 07:52:49.223185 | orchestrator |         "testbed-manager"
2026-04-05 07:52:49.223196 | orchestrator |     ]
2026-04-05 07:52:49.223207 | orchestrator | }
2026-04-05 07:52:49.223227 | orchestrator |
2026-04-05 07:52:49.223238 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:52:49.223251 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-05 07:52:49.223264 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 07:52:49.223290 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 07:52:50.959868 | orchestrator |
2026-04-05 07:52:50.959981 | orchestrator |
2026-04-05 07:52:50.959997 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:52:50.960011 | orchestrator | Sunday 05 April 2026 07:52:50 +0000 (0:00:01.752) 0:00:49.413 **********
2026-04-05 07:52:50.960022 | orchestrator | ===============================================================================
2026-04-05 07:52:50.960033 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.97s
2026-04-05 07:52:50.960045 | orchestrator | Get timestamp for report file ------------------------------------------- 2.71s
2026-04-05 07:52:50.960056 | orchestrator | Get container info ------------------------------------------------------ 2.55s
2026-04-05 07:52:50.960066 | orchestrator | Aggregate test results step one ----------------------------------------- 2.41s
2026-04-05 07:52:50.960077 | orchestrator | Write report file ------------------------------------------------------- 2.35s
2026-04-05 07:52:50.960088 | orchestrator | Flush handlers ---------------------------------------------------------- 1.89s
2026-04-05 07:52:50.960099 | orchestrator | Print report file information ------------------------------------------- 1.75s
2026-04-05 07:52:50.960109 | orchestrator | Create report output directory ------------------------------------------ 1.75s
2026-04-05 07:52:50.960120 | orchestrator | Prepare test data for container existance test -------------------------- 1.73s
2026-04-05 07:52:50.960131 | orchestrator | Flush handlers ---------------------------------------------------------- 1.65s
2026-04-05 07:52:50.960142 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.49s
2026-04-05 07:52:50.960152 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.45s
2026-04-05 07:52:50.960163 | orchestrator | Set test result to passed if container is existing ---------------------- 1.41s
2026-04-05 07:52:50.960174 | orchestrator | Set test result to failed if container is missing ----------------------- 1.38s
2026-04-05 07:52:50.960185 | orchestrator | Prepare test data ------------------------------------------------------- 1.34s
2026-04-05 07:52:50.960196 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.34s
2026-04-05 07:52:50.960207 | orchestrator | Aggregate test results step three --------------------------------------- 1.30s
2026-04-05 07:52:50.960218 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 1.30s
2026-04-05 07:52:50.960228 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s
2026-04-05 07:52:50.960239 | orchestrator | Aggregate test results step three --------------------------------------- 1.29s
2026-04-05 07:52:51.174579 | orchestrator | + osism validate ceph-osds
2026-04-05 07:53:23.597675 | orchestrator |
2026-04-05 07:53:23.597819 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-05 07:53:23.597848 | orchestrator |
2026-04-05 07:53:23.597867 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-05 07:53:23.597888 | orchestrator | Sunday 05 April 2026 07:53:07 +0000 (0:00:01.695) 0:00:01.695 **********
2026-04-05 07:53:23.597907 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 07:53:23.597928 | orchestrator |
2026-04-05 07:53:23.597948 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-05 07:53:23.597969 | orchestrator | Sunday 05 April 2026 07:53:10 +0000 (0:00:02.902) 0:00:04.598 **********
2026-04-05 07:53:23.597989 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 07:53:23.598130 | orchestrator |
2026-04-05 07:53:23.598177 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-05 07:53:23.598203 | orchestrator | Sunday 05 April 2026 07:53:11 +0000 (0:00:01.294) 0:00:05.893 **********
2026-04-05 07:53:23.598226 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-05 07:53:23.598248 | orchestrator |
2026-04-05 07:53:23.598271 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-05 07:53:23.598293 | orchestrator | Sunday 05 April 2026 07:53:13 +0000 (0:00:01.743) 0:00:07.636 **********
2026-04-05 07:53:23.598316 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:53:23.598341 | orchestrator |
2026-04-05 07:53:23.598365 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 07:53:23.598387 | orchestrator | Sunday 05 April 2026 07:53:14 +0000 (0:00:01.116) 0:00:08.753 **********
2026-04-05 07:53:23.598410 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:53:23.598431 | orchestrator |
2026-04-05 07:53:23.598451 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 07:53:23.598471 | orchestrator | Sunday 05 April 2026 07:53:16 +0000 (0:00:01.184) 0:00:09.937 **********
2026-04-05 07:53:23.598491 | orchestrator | skipping: [testbed-node-3]
2026-04-05 07:53:23.598509 | orchestrator | skipping: [testbed-node-4]
2026-04-05 07:53:23.598527 | orchestrator | skipping: [testbed-node-5]
2026-04-05 07:53:23.598546 | orchestrator |
2026-04-05 07:53:23.598612 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-05 07:53:23.598633 | orchestrator | Sunday 05 April 2026 07:53:17 +0000 (0:00:01.875) 0:00:11.813 **********
2026-04-05 07:53:23.598651 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:53:23.598669 | orchestrator |
2026-04-05 07:53:23.598688 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-05 07:53:23.598705 | orchestrator | Sunday 05 April 2026 07:53:19 +0000 (0:00:01.136) 0:00:12.950 **********
2026-04-05 07:53:23.598724 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:53:23.598742 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:53:23.598761 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:53:23.598779 | orchestrator |
2026-04-05 07:53:23.598797 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-05 07:53:23.598816 | orchestrator | Sunday 05 April 2026 07:53:20 +0000 (0:00:01.395) 0:00:14.346 **********
2026-04-05 07:53:23.598835 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:53:23.598855 | orchestrator |
2026-04-05 07:53:23.598874 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-05 07:53:23.598904 | orchestrator | Sunday 05 April 2026 07:53:21 +0000 (0:00:01.398) 0:00:15.744 **********
2026-04-05 07:53:23.598924 | orchestrator | ok: [testbed-node-3]
2026-04-05 07:53:23.598943 | orchestrator | ok: [testbed-node-4]
2026-04-05 07:53:23.598961 | orchestrator | ok: [testbed-node-5]
2026-04-05 07:53:23.598979 | orchestrator |
2026-04-05 07:53:23.598998 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-05 07:53:23.599016 | orchestrator | Sunday 05 April 2026 07:53:23 +0000 (0:00:01.329) 0:00:17.073 **********
2026-04-05 07:53:23.599036 | orchestrator | skipping: [testbed-node-3] => (item={'id': '49145e0e7812b4c69133e854e8ee3d6fc8f405cbc8434db89f7aca75811f390d', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-05 07:53:23.599051 | orchestrator | skipping: [testbed-node-3] => (item={'id': '43142d618407bd4d4c37b950c138955ce5ed0cb16164bff235d556e345f0a66d', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.599062 | orchestrator | skipping: [testbed-node-3] => (item={'id': '419e77fe8427a60d13bb438407d8cf66ed5d8a85b22eb374f102151b69d4548e', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.599092 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6c99873df40f75b2a67d3da4ece49338049182a8f838cf27e19bf006abc2371c', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-05 07:53:23.599104 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d58e301deb28d8abbce282f153f5703de794249f5a0ddc3c94d13677a4d6ccd', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-04-05 07:53:23.599141 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af5c1fb5fef25c6cb61e9540a86440380a56fe5c9180b0468e03d0767563413e', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-05 07:53:23.599155 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a61c0fa1df4ac7aaeeee459b725fcd16a745fb542a75a65ddebb9992df6aab0b', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-05 07:53:23.599166 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd95222d8b745c2a2568a87b26a152b084d1c9e6a51cc4cb64f78d37d0ca4083e', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 07:53:23.599190 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4f624366bd31567730a8efb808aac81dc2fc546301e50990520b1526e99c6f60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 07:53:23.599201 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76d79e3845e4dbbbf397f11d71c618136f3b4a617a02449e27f9fba446cf3504', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.599212 | orchestrator | skipping: [testbed-node-3] => (item={'id': '164af5161f273e4795ea0ac7047853368d4baf1e361a394253851bb1618588ab', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.599226 | orchestrator | ok: [testbed-node-3] => (item={'id': '17ea7613efe517b78cc30dd8e93301712984394ee095cead5d8ed213c7c9fe28', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.599238 | orchestrator | ok: [testbed-node-3] => (item={'id': 'bc2e3b2e833be79d60cb6b51198041ba6d0e00b9554d98d0af50ea750b9f5eb4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.599254 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af6e5262807c21c0fdff937d9117289b994d4c986b92fb1219d5d71a8008e4c6', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.599266 | orchestrator | skipping: [testbed-node-3] => (item={'id': '99b846f96728f9d47f66ce783eadf70476994dbb93050905687d2a5db1b6f0dd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:53:23.599277 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0b441db2b9b06968335ee201d5e1faaa2ab8e2de3ba5a3d675317fb26b159073', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:53:23.599296 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9469da0c3b128be44c7528181e1686e5bddb491ceee511d2a54404a942357d51', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.599307 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6684ff2a56e966207c76ef39d1583a70eb4780bda818242986ef0bd9db26e785', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.599318 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5ab8ba7a6ccaeb7575b33a4ffe12d6d4a441b7ee7f37a8c53e9cbe344901513b', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.599329 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c39568f4db33359c5244ab3f405da5330f356f9d9c3ce2f0a57bba2f54848012', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-05 07:53:23.599348 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ba53d5f1313681c49b989129683b6e3f3ee161645c98deb39f11b383c39c8771', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.783451 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bbb23da129c897d75d00e0a5f3e12ce42c7f54d377fb169e9d480640bbb34bfe', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.783625 | orchestrator | skipping: [testbed-node-4] => (item={'id': '732e7dfc2e1a9279b2ad555589f5f383c9164ebe9e76337c7de6846b198b7f49', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-05 07:53:23.783658 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5fd977c2a9cbf729bddc3ab0658bbbcb37b9a3eb8615e2a4758c5623ebb6d5d7', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-04-05 07:53:23.783678 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2660c22d51e8d3e190cf8e02a5fdacf9fb48da903c01546228995a69e34d032b', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})
2026-04-05 07:53:23.783699 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b816ae163d06e16bf6ee33103cce33ac35f83295e5026a18fb816f93ea5158a3', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-05 07:53:23.783719 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3863748e2af062db6ac7abde81d945c4adfbaf1819b9b939573fba781c0e1534', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 07:53:23.783762 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7cca7182cb92fafe22bcd4fb50dad9041d19fc515f9fbb2d55e1a70ece12d751', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 07:53:23.783783 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd4b181586e000f636a9a623ded4ae2ac3e19904a1d0298be636d7ee13144c657', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 07:53:23.783828 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c2f5c4bd35075986d8bf59ffe5be9ba6209789cba34703ecf135c629bf21f021', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.783851 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c9a19f9813d85c849aabf849ce558ef17e46b9ee31676b80a9c4690c5c59dfe0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.783871 | orchestrator | ok: [testbed-node-4] => (item={'id': '16932dacb5e174d0cebe75847d14abb92ad8847b18e08f3e450ce570b0eab91c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.783890 | orchestrator | skipping: [testbed-node-4] => (item={'id': '12f2a677ae8772840b5a79202817b4b5cc3ab2c0fc421d5bc3ae76a66bcc1314', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.783907 | orchestrator | skipping: [testbed-node-4] => (item={'id': '52bf4f0602d5502a3cec9f1f6dd696b783b238930ec39f320a323e4a47176668', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:53:23.783926 | orchestrator | skipping: [testbed-node-4] => (item={'id': '16e018409e8440f1ec4acaedcfd125ca1f9cc4598be4a73a0d766eed6faf09dd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:53:23.783973 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5693dab5ace066e6ca243faab5941022d6e6152e38ca038e7abcb1affce11588', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.783995 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2db1b3e7df2883ccdbb1220235417749c8d4653fefbf902a1878d194de8fe19', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.784015 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ee305bdea4b1cb69bafee317ef3e32a104fb6473a72b8079d1fc196497ad3c29', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:53:23.784033 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e0321832c87294f48e7c33a9601871e78aade7f69513045d7799de32ad7942e4', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-05 07:53:23.784053 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b07567addc739115f5f1f8e94c93c09bf781f872ad33b6a13fa64e90b0249f1c', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.784072 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cad92c44bc49bd16a9323f0180ea4067fbe000eb99805d01b08011fecbc72dea', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-05 07:53:23.784091 | orchestrator | skipping: [testbed-node-5] => (item={'id': '70f0dec8648abda9339e807cb914285606d81ac1c63820f8d233affb17f1f128', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-05 07:53:23.784119 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd1e6353283a902b8264781fe2f8b49234c496e6faf6b9063eb018ac64ca9dcdc', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-04-05 07:53:23.784151 | orchestrator | skipping: [testbed-node-5] => (item={'id': '988f0797d837645f04c3d95a8aecd10ce68feb19cd06c3feb5bd25d31f4849b1', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 57 minutes (healthy)'})
2026-04-05 07:53:23.784172 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6c286225037277572fcc4489fc251e56ba823dac6520bc77b1ee5b175d1be61d', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})
2026-04-05 07:53:23.784191 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dd81b046219daa239869f32c7edafdd1ccdf26f221b453d1c7ddd86d51403d65', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-05 07:53:23.784212 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e9060686cadffb6b6215971259addaf629275a1941d62cfc6c39a33032ccf7a9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 07:53:23.784233 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c08ca95ff6f082eb3e45144b6d998f0147fbf186c0f56381aac288a75322186d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-05 07:53:23.784253 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9788cc0eee79ed33ffc768bb82b995c4ee14faf025ebdd614a0dfd7fd77d457e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.784274 | orchestrator | ok: [testbed-node-5] => (item={'id': '78a2f90f190ef7bb565e32843c8ad80889a7f120967951961ef8fd46ba9953af', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:53:23.784307 | orchestrator | ok: [testbed-node-5] => (item={'id': '238fe07d97be65f2773396df5c6888678c3582d1cdcf5d602ab0492b1d1d3f40', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-05 07:54:00.917209 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'faf49fafd56e8530d3be1f1bfc83a99c586f5a88ff880581c7482fc2e963f625', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:54:00.917322 | orchestrator | skipping: [testbed-node-5] => (item={'id': '82a54b2a36aab25485d1032aa6fcf2e4970ea48f4cd28125b36b595eb53e33b7', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:54:00.917340 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5d83ddd75b93189eb8783e6c9ce2d74aecf0543d90609250cec0079e6da8be3a', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})
2026-04-05 07:54:00.917353 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6dc664fb6a32723500e982be87c758bdce2e032c1fe4eb405abe76f635931f08', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:54:00.917365 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37a860d468d15854615fea915fbb5a69cdc0a5476bfd5ae175abefe9600efd84', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:54:00.917400 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4aadc100b3db3c34dd53fdac11625775d111f04fab0b71a515c5f571e2d8546f', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})
2026-04-05 07:54:00.917412 | orchestrator |
2026-04-05 07:54:00.917425 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-04-05 07:54:00.917437 | orchestrator | Sunday 05 April 2026 07:53:24 +0000 (0:00:01.708) 0:00:18.782 **********
2026-04-05 07:54:00.917448 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.917460 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.917471 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.917482 | orchestrator | 2026-04-05 07:54:00.917493 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-05 07:54:00.917504 | orchestrator | Sunday 05 April 2026 07:53:26 +0000 (0:00:01.438) 0:00:20.221 ********** 2026-04-05 07:54:00.917515 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.917526 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:00.917537 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:00.917548 | orchestrator | 2026-04-05 07:54:00.917560 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-05 07:54:00.917571 | orchestrator | Sunday 05 April 2026 07:53:27 +0000 (0:00:01.361) 0:00:21.582 ********** 2026-04-05 07:54:00.917582 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.917593 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.917604 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.917615 | orchestrator | 2026-04-05 07:54:00.917627 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 07:54:00.917638 | orchestrator | Sunday 05 April 2026 07:53:29 +0000 (0:00:01.332) 0:00:22.915 ********** 2026-04-05 07:54:00.917675 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.917687 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.917697 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.917708 | orchestrator | 2026-04-05 07:54:00.917719 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-05 07:54:00.917731 | orchestrator | Sunday 05 April 2026 07:53:30 +0000 (0:00:01.672) 0:00:24.587 ********** 2026-04-05 07:54:00.917742 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-05 07:54:00.917755 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-05 07:54:00.917766 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.917795 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-05 07:54:00.917807 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-05 07:54:00.917818 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:00.917829 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-05 07:54:00.917840 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-05 07:54:00.917851 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:00.917862 | orchestrator | 2026-04-05 07:54:00.917873 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-05 07:54:00.917884 | orchestrator | Sunday 05 April 2026 07:53:32 +0000 (0:00:01.352) 0:00:25.940 ********** 2026-04-05 07:54:00.917895 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.917906 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.917917 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.917928 | orchestrator | 2026-04-05 07:54:00.917939 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 07:54:00.917950 | orchestrator | Sunday 05 April 2026 07:53:33 +0000 (0:00:01.310) 0:00:27.250 ********** 2026-04-05 07:54:00.917978 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918000 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:00.918011 | orchestrator | skipping: [testbed-node-5] 2026-04-05 
07:54:00.918083 | orchestrator | 2026-04-05 07:54:00.918095 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-05 07:54:00.918106 | orchestrator | Sunday 05 April 2026 07:53:34 +0000 (0:00:01.500) 0:00:28.751 ********** 2026-04-05 07:54:00.918118 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918129 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:00.918140 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:00.918151 | orchestrator | 2026-04-05 07:54:00.918162 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-05 07:54:00.918173 | orchestrator | Sunday 05 April 2026 07:53:36 +0000 (0:00:01.385) 0:00:30.136 ********** 2026-04-05 07:54:00.918184 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918196 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.918207 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.918218 | orchestrator | 2026-04-05 07:54:00.918229 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 07:54:00.918240 | orchestrator | Sunday 05 April 2026 07:53:37 +0000 (0:00:01.316) 0:00:31.453 ********** 2026-04-05 07:54:00.918251 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918262 | orchestrator | 2026-04-05 07:54:00.918274 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 07:54:00.918285 | orchestrator | Sunday 05 April 2026 07:53:38 +0000 (0:00:01.244) 0:00:32.698 ********** 2026-04-05 07:54:00.918296 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918307 | orchestrator | 2026-04-05 07:54:00.918318 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 07:54:00.918330 | orchestrator | Sunday 05 April 2026 07:53:40 +0000 (0:00:01.248) 0:00:33.946 ********** 2026-04-05 07:54:00.918341 | 
orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918352 | orchestrator | 2026-04-05 07:54:00.918363 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:00.918374 | orchestrator | Sunday 05 April 2026 07:53:41 +0000 (0:00:01.311) 0:00:35.257 ********** 2026-04-05 07:54:00.918385 | orchestrator | 2026-04-05 07:54:00.918397 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:00.918408 | orchestrator | Sunday 05 April 2026 07:53:41 +0000 (0:00:00.644) 0:00:35.902 ********** 2026-04-05 07:54:00.918419 | orchestrator | 2026-04-05 07:54:00.918430 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:00.918441 | orchestrator | Sunday 05 April 2026 07:53:42 +0000 (0:00:00.449) 0:00:36.352 ********** 2026-04-05 07:54:00.918452 | orchestrator | 2026-04-05 07:54:00.918469 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 07:54:00.918481 | orchestrator | Sunday 05 April 2026 07:53:43 +0000 (0:00:00.812) 0:00:37.165 ********** 2026-04-05 07:54:00.918492 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918503 | orchestrator | 2026-04-05 07:54:00.918514 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-05 07:54:00.918526 | orchestrator | Sunday 05 April 2026 07:53:44 +0000 (0:00:01.265) 0:00:38.431 ********** 2026-04-05 07:54:00.918537 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918548 | orchestrator | 2026-04-05 07:54:00.918559 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 07:54:00.918570 | orchestrator | Sunday 05 April 2026 07:53:45 +0000 (0:00:01.284) 0:00:39.715 ********** 2026-04-05 07:54:00.918581 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918592 | 
orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.918603 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.918614 | orchestrator | 2026-04-05 07:54:00.918626 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-05 07:54:00.918637 | orchestrator | Sunday 05 April 2026 07:53:47 +0000 (0:00:01.421) 0:00:41.137 ********** 2026-04-05 07:54:00.918669 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918689 | orchestrator | 2026-04-05 07:54:00.918700 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-05 07:54:00.918711 | orchestrator | Sunday 05 April 2026 07:53:48 +0000 (0:00:01.240) 0:00:42.377 ********** 2026-04-05 07:54:00.918722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-05 07:54:00.918733 | orchestrator | 2026-04-05 07:54:00.918744 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-05 07:54:00.918755 | orchestrator | Sunday 05 April 2026 07:53:52 +0000 (0:00:03.691) 0:00:46.069 ********** 2026-04-05 07:54:00.918767 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918778 | orchestrator | 2026-04-05 07:54:00.918789 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-05 07:54:00.918800 | orchestrator | Sunday 05 April 2026 07:53:53 +0000 (0:00:01.148) 0:00:47.218 ********** 2026-04-05 07:54:00.918811 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918822 | orchestrator | 2026-04-05 07:54:00.918833 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-05 07:54:00.918844 | orchestrator | Sunday 05 April 2026 07:53:54 +0000 (0:00:01.292) 0:00:48.510 ********** 2026-04-05 07:54:00.918855 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:00.918866 | orchestrator | 2026-04-05 07:54:00.918877 | orchestrator | TASK [Pass 
test if OSDs are all up and in] ************************************* 2026-04-05 07:54:00.918888 | orchestrator | Sunday 05 April 2026 07:53:55 +0000 (0:00:01.169) 0:00:49.679 ********** 2026-04-05 07:54:00.918899 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918910 | orchestrator | 2026-04-05 07:54:00.918921 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 07:54:00.918932 | orchestrator | Sunday 05 April 2026 07:53:56 +0000 (0:00:01.150) 0:00:50.829 ********** 2026-04-05 07:54:00.918943 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:00.918954 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:00.918965 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:00.918976 | orchestrator | 2026-04-05 07:54:00.918987 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-05 07:54:00.918998 | orchestrator | Sunday 05 April 2026 07:53:58 +0000 (0:00:01.415) 0:00:52.245 ********** 2026-04-05 07:54:00.919010 | orchestrator | changed: [testbed-node-3] 2026-04-05 07:54:00.919021 | orchestrator | changed: [testbed-node-4] 2026-04-05 07:54:00.919039 | orchestrator | changed: [testbed-node-5] 2026-04-05 07:54:32.019195 | orchestrator | 2026-04-05 07:54:32.019303 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-05 07:54:32.019319 | orchestrator | Sunday 05 April 2026 07:54:02 +0000 (0:00:03.687) 0:00:55.933 ********** 2026-04-05 07:54:32.019330 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019340 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019350 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019360 | orchestrator | 2026-04-05 07:54:32.019371 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-05 07:54:32.019381 | orchestrator | Sunday 05 April 2026 07:54:03 +0000 (0:00:01.409) 0:00:57.342 ********** 
2026-04-05 07:54:32.019391 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019401 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019411 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019420 | orchestrator | 2026-04-05 07:54:32.019430 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-05 07:54:32.019440 | orchestrator | Sunday 05 April 2026 07:54:04 +0000 (0:00:01.546) 0:00:58.889 ********** 2026-04-05 07:54:32.019450 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:32.019461 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:32.019471 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:32.019481 | orchestrator | 2026-04-05 07:54:32.019491 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-05 07:54:32.019501 | orchestrator | Sunday 05 April 2026 07:54:06 +0000 (0:00:01.392) 0:01:00.281 ********** 2026-04-05 07:54:32.019535 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019545 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019555 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019565 | orchestrator | 2026-04-05 07:54:32.019574 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-05 07:54:32.019584 | orchestrator | Sunday 05 April 2026 07:54:07 +0000 (0:00:01.379) 0:01:01.661 ********** 2026-04-05 07:54:32.019594 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:32.019604 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:32.019613 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:32.019623 | orchestrator | 2026-04-05 07:54:32.019633 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-05 07:54:32.019643 | orchestrator | Sunday 05 April 2026 07:54:09 +0000 (0:00:01.551) 0:01:03.213 ********** 2026-04-05 07:54:32.019652 | 
orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:32.019662 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:32.019672 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:32.019681 | orchestrator | 2026-04-05 07:54:32.019691 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-05 07:54:32.019737 | orchestrator | Sunday 05 April 2026 07:54:10 +0000 (0:00:01.350) 0:01:04.563 ********** 2026-04-05 07:54:32.019751 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019763 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019775 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019787 | orchestrator | 2026-04-05 07:54:32.019798 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-05 07:54:32.019809 | orchestrator | Sunday 05 April 2026 07:54:12 +0000 (0:00:01.518) 0:01:06.082 ********** 2026-04-05 07:54:32.019821 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019833 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019844 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019856 | orchestrator | 2026-04-05 07:54:32.019868 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-05 07:54:32.019880 | orchestrator | Sunday 05 April 2026 07:54:13 +0000 (0:00:01.588) 0:01:07.671 ********** 2026-04-05 07:54:32.019890 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.019900 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.019909 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.019919 | orchestrator | 2026-04-05 07:54:32.019929 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-05 07:54:32.019939 | orchestrator | Sunday 05 April 2026 07:54:15 +0000 (0:00:01.326) 0:01:08.997 ********** 2026-04-05 07:54:32.019949 | orchestrator | skipping: [testbed-node-3] 2026-04-05 
07:54:32.019959 | orchestrator | skipping: [testbed-node-4] 2026-04-05 07:54:32.019969 | orchestrator | skipping: [testbed-node-5] 2026-04-05 07:54:32.019978 | orchestrator | 2026-04-05 07:54:32.019988 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-05 07:54:32.019998 | orchestrator | Sunday 05 April 2026 07:54:16 +0000 (0:00:01.340) 0:01:10.338 ********** 2026-04-05 07:54:32.020008 | orchestrator | ok: [testbed-node-3] 2026-04-05 07:54:32.020018 | orchestrator | ok: [testbed-node-4] 2026-04-05 07:54:32.020028 | orchestrator | ok: [testbed-node-5] 2026-04-05 07:54:32.020037 | orchestrator | 2026-04-05 07:54:32.020047 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-05 07:54:32.020057 | orchestrator | Sunday 05 April 2026 07:54:17 +0000 (0:00:01.345) 0:01:11.683 ********** 2026-04-05 07:54:32.020066 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 07:54:32.020077 | orchestrator | 2026-04-05 07:54:32.020087 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-05 07:54:32.020096 | orchestrator | Sunday 05 April 2026 07:54:19 +0000 (0:00:01.266) 0:01:12.949 ********** 2026-04-05 07:54:32.020106 | orchestrator | skipping: [testbed-node-3] 2026-04-05 07:54:32.020116 | orchestrator | 2026-04-05 07:54:32.020126 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-05 07:54:32.020136 | orchestrator | Sunday 05 April 2026 07:54:20 +0000 (0:00:01.497) 0:01:14.447 ********** 2026-04-05 07:54:32.020154 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 07:54:32.020164 | orchestrator | 2026-04-05 07:54:32.020174 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-05 07:54:32.020184 | orchestrator | Sunday 05 April 2026 07:54:23 +0000 (0:00:02.698) 
0:01:17.146 ********** 2026-04-05 07:54:32.020194 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 07:54:32.020203 | orchestrator | 2026-04-05 07:54:32.020213 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-05 07:54:32.020223 | orchestrator | Sunday 05 April 2026 07:54:24 +0000 (0:00:01.258) 0:01:18.404 ********** 2026-04-05 07:54:32.020233 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 07:54:32.020243 | orchestrator | 2026-04-05 07:54:32.020268 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:32.020279 | orchestrator | Sunday 05 April 2026 07:54:25 +0000 (0:00:01.297) 0:01:19.701 ********** 2026-04-05 07:54:32.020289 | orchestrator | 2026-04-05 07:54:32.020299 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:32.020309 | orchestrator | Sunday 05 April 2026 07:54:26 +0000 (0:00:00.462) 0:01:20.164 ********** 2026-04-05 07:54:32.020319 | orchestrator | 2026-04-05 07:54:32.020329 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-05 07:54:32.020339 | orchestrator | Sunday 05 April 2026 07:54:26 +0000 (0:00:00.442) 0:01:20.606 ********** 2026-04-05 07:54:32.020348 | orchestrator | 2026-04-05 07:54:32.020358 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-05 07:54:32.020368 | orchestrator | Sunday 05 April 2026 07:54:27 +0000 (0:00:00.788) 0:01:21.394 ********** 2026-04-05 07:54:32.020378 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-05 07:54:32.020388 | orchestrator | 2026-04-05 07:54:32.020398 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-05 07:54:32.020408 | orchestrator | Sunday 05 April 2026 07:54:29 
+0000 (0:00:02.379) 0:01:23.774 ********** 2026-04-05 07:54:32.020418 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-05 07:54:32.020428 | orchestrator |  "msg": [ 2026-04-05 07:54:32.020438 | orchestrator |  "Validator run completed.", 2026-04-05 07:54:32.020448 | orchestrator |  "You can find the report file here:", 2026-04-05 07:54:32.020458 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-05T07:53:08+00:00-report.json", 2026-04-05 07:54:32.020469 | orchestrator |  "on the following host:", 2026-04-05 07:54:32.020479 | orchestrator |  "testbed-manager" 2026-04-05 07:54:32.020489 | orchestrator |  ] 2026-04-05 07:54:32.020500 | orchestrator | } 2026-04-05 07:54:32.020510 | orchestrator | 2026-04-05 07:54:32.020520 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:54:32.020531 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-05 07:54:32.020542 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 07:54:32.020557 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-05 07:54:32.020568 | orchestrator | 2026-04-05 07:54:32.020578 | orchestrator | 2026-04-05 07:54:32.020588 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:54:32.020598 | orchestrator | Sunday 05 April 2026 07:54:31 +0000 (0:00:01.745) 0:01:25.520 ********** 2026-04-05 07:54:32.020608 | orchestrator | =============================================================================== 2026-04-05 07:54:32.020617 | orchestrator | Get ceph osd tree ------------------------------------------------------- 3.69s 2026-04-05 07:54:32.020635 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 3.69s 2026-04-05 
07:54:32.020645 | orchestrator | Get timestamp for report file ------------------------------------------- 2.90s 2026-04-05 07:54:32.020655 | orchestrator | Aggregate test results step one ----------------------------------------- 2.70s 2026-04-05 07:54:32.020665 | orchestrator | Write report file ------------------------------------------------------- 2.38s 2026-04-05 07:54:32.020674 | orchestrator | Flush handlers ---------------------------------------------------------- 1.91s 2026-04-05 07:54:32.020684 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.88s 2026-04-05 07:54:32.020694 | orchestrator | Print report file information ------------------------------------------- 1.75s 2026-04-05 07:54:32.020704 | orchestrator | Create report output directory ------------------------------------------ 1.74s 2026-04-05 07:54:32.020751 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 1.71s 2026-04-05 07:54:32.020773 | orchestrator | Flush handlers ---------------------------------------------------------- 1.69s 2026-04-05 07:54:32.020784 | orchestrator | Prepare test data ------------------------------------------------------- 1.67s 2026-04-05 07:54:32.020794 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.59s 2026-04-05 07:54:32.020804 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 1.55s 2026-04-05 07:54:32.020814 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 1.55s 2026-04-05 07:54:32.020824 | orchestrator | Prepare test data ------------------------------------------------------- 1.52s 2026-04-05 07:54:32.020834 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 1.50s 2026-04-05 07:54:32.020844 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.50s 2026-04-05 07:54:32.020854 
| orchestrator | Get count of ceph-osd containers on host -------------------------------- 1.44s 2026-04-05 07:54:32.020864 | orchestrator | Prepare test data ------------------------------------------------------- 1.42s 2026-04-05 07:54:32.208756 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-05 07:54:32.217417 | orchestrator | + set -e 2026-04-05 07:54:32.217533 | orchestrator | + source /opt/manager-vars.sh 2026-04-05 07:54:32.217547 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-05 07:54:32.217556 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-05 07:54:32.217565 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-05 07:54:32.217574 | orchestrator | ++ CEPH_VERSION=reef 2026-04-05 07:54:32.217583 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-05 07:54:32.217594 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-05 07:54:32.217603 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-05 07:54:32.217612 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-05 07:54:32.217622 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-05 07:54:32.217631 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-05 07:54:32.217640 | orchestrator | ++ export ARA=false 2026-04-05 07:54:32.217649 | orchestrator | ++ ARA=false 2026-04-05 07:54:32.217658 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-05 07:54:32.217667 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-05 07:54:32.217676 | orchestrator | ++ export TEMPEST=false 2026-04-05 07:54:32.217685 | orchestrator | ++ TEMPEST=false 2026-04-05 07:54:32.217695 | orchestrator | ++ export IS_ZUUL=true 2026-04-05 07:54:32.217704 | orchestrator | ++ IS_ZUUL=true 2026-04-05 07:54:32.217713 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 07:54:32.217741 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.238 2026-04-05 07:54:32.217750 | orchestrator | ++ export EXTERNAL_API=false 2026-04-05 07:54:32.217759 | orchestrator | 
++ EXTERNAL_API=false 2026-04-05 07:54:32.217767 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-05 07:54:32.217776 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-05 07:54:32.217785 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-05 07:54:32.217794 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-05 07:54:32.217803 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-05 07:54:32.217812 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-05 07:54:32.217821 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-05 07:54:32.217829 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-05 07:54:32.217838 | orchestrator | + source /etc/os-release 2026-04-05 07:54:32.217847 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-05 07:54:32.217856 | orchestrator | ++ NAME=Ubuntu 2026-04-05 07:54:32.217889 | orchestrator | ++ VERSION_ID=24.04 2026-04-05 07:54:32.217899 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-05 07:54:32.217908 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-05 07:54:32.217916 | orchestrator | ++ ID=ubuntu 2026-04-05 07:54:32.217925 | orchestrator | ++ ID_LIKE=debian 2026-04-05 07:54:32.217934 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-05 07:54:32.217943 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-05 07:54:32.217952 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-05 07:54:32.217961 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-05 07:54:32.217972 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-05 07:54:32.217981 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-05 07:54:32.217990 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-05 07:54:32.218000 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-05 07:54:32.218066 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client 2026-04-05 07:54:32.247277 | orchestrator | 2026-04-05 07:54:32.247358 | orchestrator | # Status of Elasticsearch 2026-04-05 07:54:32.247372 | orchestrator | 2026-04-05 07:54:32.247384 | orchestrator | + pushd /opt/configuration/contrib 2026-04-05 07:54:32.247397 | orchestrator | + echo 2026-04-05 07:54:32.247410 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-05 07:54:32.247421 | orchestrator | + echo 2026-04-05 07:54:32.247433 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-05 07:54:32.439081 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-05 07:54:32.439193 | orchestrator | 2026-04-05 07:54:32.439214 | orchestrator | # Status of MariaDB 2026-04-05 07:54:32.439228 | orchestrator | 2026-04-05 07:54:32.439242 | orchestrator | + echo 2026-04-05 07:54:32.439350 | orchestrator | + echo '# Status of MariaDB' 2026-04-05 07:54:32.439363 | orchestrator | + echo 2026-04-05 07:54:32.439375 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-05 07:54:32.494608 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-05 07:54:32.494706 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-05 07:54:32.494798 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-05 07:54:32.494814 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-05 07:54:32.565601 | orchestrator | Reading package lists... 2026-04-05 07:54:32.891606 | orchestrator | Building dependency tree... 2026-04-05 07:54:32.892093 | orchestrator | Reading state information... 
2026-04-05 07:54:33.241231 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-04-05 07:54:33.241322 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2026-04-05 07:54:33.937438 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-04-05 07:54:33.938255 | orchestrator |
2026-04-05 07:54:33.938307 | orchestrator | # Status of Prometheus
2026-04-05 07:54:33.938330 | orchestrator |
2026-04-05 07:54:33.938351 | orchestrator | + echo
2026-04-05 07:54:33.938370 | orchestrator | + echo '# Status of Prometheus'
2026-04-05 07:54:33.938389 | orchestrator | + echo
2026-04-05 07:54:33.938403 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-05 07:54:34.001189 | orchestrator | Unauthorized
2026-04-05 07:54:34.004258 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-05 07:54:34.067142 | orchestrator | Unauthorized
2026-04-05 07:54:34.071051 | orchestrator |
2026-04-05 07:54:34.071167 | orchestrator | # Status of RabbitMQ
2026-04-05 07:54:34.071200 | orchestrator |
2026-04-05 07:54:34.071239 | orchestrator | + echo
2026-04-05 07:54:34.071262 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-05 07:54:34.071291 | orchestrator | + echo
2026-04-05 07:54:34.073037 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-05 07:54:34.127366 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-05 07:54:34.127487 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-05 07:54:34.127505 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-04-05 07:54:34.771710 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-04-05 07:54:34.781422 | orchestrator |
2026-04-05 07:54:34.781493 | orchestrator | # Status of Redis
2026-04-05 07:54:34.781500 | orchestrator |
2026-04-05 07:54:34.781523 | orchestrator | + echo
2026-04-05 07:54:34.781529 | orchestrator | + echo '# Status of Redis'
2026-04-05 07:54:34.781534 | orchestrator | + echo
2026-04-05 07:54:34.781541 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-05 07:54:34.788665 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002190s;;;0.000000;10.000000
2026-04-05 07:54:34.789150 | orchestrator | + popd
2026-04-05 07:54:34.789176 | orchestrator |
2026-04-05 07:54:34.789187 | orchestrator | # Create backup of MariaDB database
2026-04-05 07:54:34.789199 | orchestrator |
2026-04-05 07:54:34.789211 | orchestrator | + echo
2026-04-05 07:54:34.789223 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-05 07:54:34.789234 | orchestrator | + echo
2026-04-05 07:54:34.789245 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-05 07:54:36.072250 | orchestrator | 2026-04-05 07:54:36 | INFO  | Prepare task for execution of mariadb_backup.
2026-04-05 07:54:36.136150 | orchestrator | 2026-04-05 07:54:36 | INFO  | Task e601524d-ed27-4630-9fb6-1a51648ad0d2 (mariadb_backup) was prepared for execution.
2026-04-05 07:54:36.136243 | orchestrator | 2026-04-05 07:54:36 | INFO  | It takes a moment until task e601524d-ed27-4630-9fb6-1a51648ad0d2 (mariadb_backup) has been started and output is visible here.
2026-04-05 07:55:45.348602 | orchestrator |
2026-04-05 07:55:45.348722 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-05 07:55:45.348739 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-05 07:55:45.348752 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-05 07:55:45.348775 | orchestrator |
2026-04-05 07:55:45.348787 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-05 07:55:45.348798 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-05 07:55:45.348809 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-05 07:55:45.348831 | orchestrator | Sunday 05 April 2026 07:54:40 +0000 (0:00:01.428) 0:00:01.428 **********
2026-04-05 07:55:45.348842 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:55:45.348854 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:55:45.348916 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:55:45.348928 | orchestrator |
2026-04-05 07:55:45.348939 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-05 07:55:45.348950 | orchestrator | Sunday 05 April 2026 07:54:41 +0000 (0:00:00.780) 0:00:02.209 **********
2026-04-05 07:55:45.348961 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-05 07:55:45.348973 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-05 07:55:45.348984 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-05 07:55:45.348995 | orchestrator |
2026-04-05 07:55:45.349006 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-05 07:55:45.349017 | orchestrator |
2026-04-05 07:55:45.349028 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-05 07:55:45.349039 | orchestrator | Sunday 05 April 2026 07:54:42 +0000 (0:00:00.801) 0:00:03.010 **********
2026-04-05 07:55:45.349050 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-05 07:55:45.349061 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-05 07:55:45.349072 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-05 07:55:45.349083 | orchestrator |
2026-04-05 07:55:45.349094 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-05 07:55:45.349105 | orchestrator | Sunday 05 April 2026 07:54:42 +0000 (0:00:00.411) 0:00:03.422 **********
2026-04-05 07:55:45.349117 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-05 07:55:45.349205 | orchestrator |
2026-04-05 07:55:45.349221 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-05 07:55:45.349234 | orchestrator | Sunday 05 April 2026 07:54:44 +0000 (0:00:01.180) 0:00:04.602 **********
2026-04-05 07:55:45.349245 | orchestrator | ok: [testbed-node-0]
2026-04-05 07:55:45.349256 | orchestrator | ok: [testbed-node-1]
2026-04-05 07:55:45.349267 | orchestrator | ok: [testbed-node-2]
2026-04-05 07:55:45.349278 | orchestrator |
2026-04-05 07:55:45.349289 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-05 07:55:45.349300 | orchestrator | Sunday 05 April 2026 07:54:49 +0000 (0:00:05.012) 0:00:09.615 **********
2026-04-05 07:55:45.349312 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:55:45.349323 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:55:45.349334 | orchestrator | changed: [testbed-node-0]
2026-04-05 07:55:45.349346 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-05 07:55:45.349357 | orchestrator |
2026-04-05 07:55:45.349368 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-05 07:55:45.349379 | orchestrator | skipping: no hosts matched
2026-04-05 07:55:45.349390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-05 07:55:45.349401 | orchestrator |
2026-04-05 07:55:45.349412 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-05 07:55:45.349422 | orchestrator | skipping: no hosts matched
2026-04-05 07:55:45.349433 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-05 07:55:45.349444 | orchestrator | mariadb_bootstrap_restart
2026-04-05 07:55:45.349455 | orchestrator |
2026-04-05 07:55:45.349466 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-05 07:55:45.349477 | orchestrator | skipping: no hosts matched
2026-04-05 07:55:45.349488 | orchestrator |
2026-04-05 07:55:45.349499 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-05 07:55:45.349510 | orchestrator |
2026-04-05 07:55:45.349521 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-05 07:55:45.349532 | orchestrator | Sunday 05 April 2026 07:55:43 +0000 (0:00:54.539) 0:01:04.154 **********
2026-04-05 07:55:45.349542 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:55:45.349553 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:55:45.349564 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:55:45.349575 | orchestrator |
2026-04-05 07:55:45.349586 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-05 07:55:45.349597 | orchestrator | Sunday 05 April 2026 07:55:43 +0000 (0:00:00.305) 0:01:04.460 **********
2026-04-05 07:55:45.349607 | orchestrator | skipping: [testbed-node-0]
2026-04-05 07:55:45.349618 | orchestrator | skipping: [testbed-node-1]
2026-04-05 07:55:45.349629 | orchestrator | skipping: [testbed-node-2]
2026-04-05 07:55:45.349640 | orchestrator |
2026-04-05 07:55:45.349651 | orchestrator | PLAY RECAP *********************************************************************
2026-04-05 07:55:45.349663 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-05 07:55:45.349675 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 07:55:45.349704 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-05 07:55:45.349716 | orchestrator |
2026-04-05 07:55:45.349727 | orchestrator |
2026-04-05 07:55:45.349738 | orchestrator | TASKS RECAP ********************************************************************
2026-04-05 07:55:45.349749 | orchestrator | Sunday 05 April 2026 07:55:44 +0000 (0:00:00.980) 0:01:05.441 **********
2026-04-05 07:55:45.349760 | orchestrator | ===============================================================================
2026-04-05 07:55:45.349779 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 54.54s
2026-04-05 07:55:45.349790 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 5.01s
2026-04-05 07:55:45.349801 | orchestrator | mariadb : include_tasks ------------------------------------------------- 1.18s
2026-04-05 07:55:45.349812 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.98s
2026-04-05 07:55:45.349822 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2026-04-05 07:55:45.349849 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s
2026-04-05 07:55:45.349878 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s
2026-04-05 07:55:45.349889 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2026-04-05 07:55:45.540758 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-05 07:55:45.547408 | orchestrator | + set -e
2026-04-05 07:55:45.547471 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-05 07:55:45.547485 | orchestrator | ++ export INTERACTIVE=false
2026-04-05 07:55:45.547497 | orchestrator | ++ INTERACTIVE=false
2026-04-05 07:55:45.547507 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-05 07:55:45.547517 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-05 07:55:45.547527 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-05 07:55:45.548558 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-05 07:55:45.554749 | orchestrator |
2026-04-05 07:55:45.554776 | orchestrator | # OpenStack endpoints
2026-04-05 07:55:45.554787 | orchestrator |
2026-04-05 07:55:45.554797 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-05 07:55:45.554807 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-05 07:55:45.554817 | orchestrator | + export OS_CLOUD=admin
2026-04-05 07:55:45.554827 | orchestrator | + OS_CLOUD=admin
2026-04-05 07:55:45.554838 | orchestrator | + echo
2026-04-05 07:55:45.554853 | orchestrator | + echo '# OpenStack endpoints'
2026-04-05 07:55:45.554900 | orchestrator | + echo
2026-04-05 07:55:45.554916 | orchestrator | + openstack endpoint list
2026-04-05 07:55:48.650833 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-05 07:55:48.651003 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-05 07:55:48.651022 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-05 07:55:48.651052 | orchestrator | | 07b7e6164bc94abfb5d60f8152956714 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-05 07:55:48.651064 | orchestrator | | 0cc86fa669ff483a9bf473cd7990230b | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-05 07:55:48.651075 | orchestrator | | 15be6b3ed327479d8433ee3710783003 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-05 07:55:48.651086 | orchestrator | | 1f501fe843ca40b59d0c9f2764fb8216 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-05 07:55:48.651097 | orchestrator | | 3bd2c56d2eb74f84b51cae6d4ffb391c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-05 07:55:48.651108 | orchestrator | | 46a2c57c38894c8e8363ccfa8225cf7c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-05 07:55:48.651119 | orchestrator | | 49dfaf7fd9aa40a0bb118130e5a3313a | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-05 07:55:48.651158 | orchestrator | | 55d7ed0d8b1d4a07b78cc21067d1e405 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-05 07:55:48.651171 | orchestrator | | 58ed792da7ad4db0a8a26bf8e4fe496d | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-05 07:55:48.651182 | orchestrator | | 5bd814c8eda54e8db61160aa23f99282 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-05 07:55:48.651193 | orchestrator | | 6575a959ef474dcfb35e941f4ffbe453 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-05 07:55:48.651204 | orchestrator | | 6f6e3a9fd2df42848901e745d98dcf21 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-05 07:55:48.651215 | orchestrator | | 74170d993deb41cbbb56519d64337c31 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-05 07:55:48.651226 | orchestrator | | 80596c54a1994079ac084ae17fd054b1 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-05 07:55:48.651237 | orchestrator | | 8bec892317e84be19a8ec5b6d0991c7f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-05 07:55:48.651248 | orchestrator | | 96e8397f1d064a0cbef5331c2107172b | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-05 07:55:48.651259 | orchestrator | | 99d5072493f84c21814b3c29aba36a1d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 07:55:48.651271 | orchestrator | | a0f545351afe430ebf500758abcef159 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-05 07:55:48.651311 | orchestrator | | a51b5bac1d604ec2ab1ed244112690d3 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-05 07:55:48.651324 | orchestrator | | b12ee1474589431b8bace137775d3c62 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-05 07:55:48.651357 | orchestrator | | bc4ecfa9d29449a897ccec79007e5167 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-05 07:55:48.651371 | orchestrator | | c3d9559bcb3b49769ea704732acd210c | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-05 07:55:48.651386 | orchestrator | | c5cb2d6adc1e47479e61b8b2f12cb10b | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-05 07:55:48.651405 | orchestrator | | ca52d0113823466caf1ff7a2ee1c5b95 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-05 07:55:48.651420 | orchestrator | | cdf4b41e16ff405db61559031c9edcf9 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-05 07:55:48.651434 | orchestrator | | d2eef68c15e947d79b96d624efbe6ce6 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-05 07:55:48.651448 | orchestrator | | de7ce786dd324602be6db256eb948d49 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-05 07:55:48.651462 | orchestrator | | e9a534245e3a4dd08b3722a84b0cc502 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-05 07:55:48.651485 | orchestrator | | f1b7f0a33b6f4906b1baf2d85452abc8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-05 07:55:48.651500 | orchestrator | | f9396148a41343f592f9a78ffc011d45 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-05 07:55:48.651514 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-05 07:55:48.894650 | orchestrator |
2026-04-05 07:55:48.894765 | orchestrator | # Cinder
2026-04-05 07:55:48.894785 | orchestrator |
2026-04-05 07:55:48.894800 | orchestrator | + echo
2026-04-05 07:55:48.894815 | orchestrator | + echo '# Cinder'
2026-04-05 07:55:48.894830 | orchestrator | + echo
2026-04-05 07:55:48.894845 | orchestrator | + openstack volume service list
2026-04-05 07:55:51.572536 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 07:55:51.572665 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 07:55:51.572688 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 07:55:51.572705 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T07:55:45.000000 |
2026-04-05 07:55:51.572720 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T07:55:46.000000 |
2026-04-05 07:55:51.572736 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T07:55:45.000000 |
2026-04-05 07:55:51.572753 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-05T07:55:44.000000 |
2026-04-05 07:55:51.572769 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-05T07:55:45.000000 |
2026-04-05 07:55:51.572786 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-05T07:55:45.000000 |
2026-04-05 07:55:51.572804 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-05T07:55:50.000000 |
2026-04-05 07:55:51.572820 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-05T07:55:50.000000 |
2026-04-05 07:55:51.572837 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-05T07:55:50.000000 |
2026-04-05 07:55:51.572847 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-05 07:55:51.834215 | orchestrator |
2026-04-05 07:55:51.834299 | orchestrator | # Neutron
2026-04-05 07:55:51.834310 | orchestrator |
2026-04-05 07:55:51.834319 | orchestrator | + echo
2026-04-05 07:55:51.834327 | orchestrator | + echo '# Neutron'
2026-04-05 07:55:51.834336 | orchestrator | + echo
2026-04-05 07:55:51.834343 | orchestrator | + openstack network agent list
2026-04-05 07:55:54.620532 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 07:55:54.620652 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-05 07:55:54.620674 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 07:55:54.620693 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 07:55:54.620710 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620727 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620778 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620795 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620860 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 07:55:54.620933 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620952 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-05 07:55:54.620969 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-05 07:55:54.620985 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-05 07:55:54.883809 | orchestrator | + openstack network service provider list
2026-04-05 07:55:57.462669 | orchestrator | +---------------+------+---------+
2026-04-05 07:55:57.462796 | orchestrator | | Service Type | Name | Default |
2026-04-05 07:55:57.462811 | orchestrator | +---------------+------+---------+
2026-04-05 07:55:57.462823 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-05 07:55:57.463575 | orchestrator | +---------------+------+---------+
2026-04-05 07:55:57.729692 | orchestrator |
2026-04-05 07:55:57.729788 | orchestrator | # Nova
2026-04-05 07:55:57.729802 | orchestrator |
2026-04-05 07:55:57.729814 | orchestrator | + echo
2026-04-05 07:55:57.729826 | orchestrator | + echo '# Nova'
2026-04-05 07:55:57.729838 | orchestrator | + echo
2026-04-05 07:55:57.729849 | orchestrator | + openstack compute service list
2026-04-05 07:56:00.517114 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 07:56:00.517198 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-05 07:56:00.517206 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 07:56:00.517212 | orchestrator | | f775f423-0d69-496e-82f8-2fe2f6571662 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-05T07:55:51.000000 |
2026-04-05 07:56:00.517217 | orchestrator | | b6a43bca-176c-41b8-aa16-1d48be363599 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-05T07:55:50.000000 |
2026-04-05 07:56:00.517223 | orchestrator | | 81060e8f-282f-4e0b-b94e-0cd362086160 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-05T07:55:51.000000 |
2026-04-05 07:56:00.517228 | orchestrator | | 1852e77e-a7f0-49ed-9780-af6d3674f8a5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-05T07:55:58.000000 |
2026-04-05 07:56:00.517233 | orchestrator | | 91486a54-2e88-4c50-80f5-51c1889fe10a | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-05T07:55:55.000000 |
2026-04-05 07:56:00.517239 | orchestrator | | f8cbd1ee-48a4-48d6-9b6f-afbc2ff571e8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-05T07:55:53.000000 |
2026-04-05 07:56:00.517244 | orchestrator | | b6bcc8b5-f1ad-4148-825a-ef100b1636e2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-05T07:55:56.000000 |
2026-04-05 07:56:00.517249 | orchestrator | | 9350d856-2adf-49c7-81fe-2646d0965852 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-05T07:55:53.000000 |
2026-04-05 07:56:00.517254 | orchestrator | | 52112d3f-4aed-4606-9b90-0b3f50064b89 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-05T07:55:54.000000 |
2026-04-05 07:56:00.517260 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-05 07:56:00.804329 | orchestrator | + openstack hypervisor list
2026-04-05 07:56:03.413412 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 07:56:03.413521 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-05 07:56:03.413536 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 07:56:03.413549 | orchestrator | | 52ba7715-3dab-4ac8-af6d-d34d4eeee8c7 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-05 07:56:03.413561 | orchestrator | | 125c80d6-25de-43cd-9687-0e659acf3d20 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-05 07:56:03.413572 | orchestrator | | 09b59826-c511-4ca1-8094-cc59cdf53dd4 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-05 07:56:03.413584 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-05 07:56:03.676959 | orchestrator |
2026-04-05 07:56:03.677039 | orchestrator | # Run OpenStack test play
2026-04-05 07:56:03.677050 | orchestrator |
2026-04-05 07:56:03.677057 | orchestrator | + echo
2026-04-05 07:56:03.677065 | orchestrator | + echo '# Run OpenStack test play'
2026-04-05 07:56:03.677073 | orchestrator | + echo
2026-04-05 07:56:03.677081 | orchestrator | + osism apply --environment openstack test
2026-04-05 07:56:05.009878 | orchestrator | 2026-04-05 07:56:05 | INFO  | Trying to run play test in environment openstack
2026-04-05 07:56:15.175415 | orchestrator | 2026-04-05 07:56:15 | INFO  | Prepare task for execution of test.
2026-04-05 07:56:15.261602 | orchestrator | 2026-04-05 07:56:15 | INFO  | Task 2fc1ccf1-0bf0-4e37-a312-b0f0cb20a2d3 (test) was prepared for execution.
2026-04-05 07:56:15.261712 | orchestrator | 2026-04-05 07:56:15 | INFO  | It takes a moment until task 2fc1ccf1-0bf0-4e37-a312-b0f0cb20a2d3 (test) has been started and output is visible here.
2026-04-05 07:58:52.759488 | orchestrator |
2026-04-05 07:58:52.759629 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-05 07:58:52.759654 | orchestrator |
2026-04-05 07:58:52.759673 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-05 07:58:52.759692 | orchestrator | Sunday 05 April 2026 07:56:20 +0000 (0:00:01.498) 0:00:01.498 **********
2026-04-05 07:58:52.759708 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.759726 | orchestrator |
2026-04-05 07:58:52.759743 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-05 07:58:52.759760 | orchestrator | Sunday 05 April 2026 07:56:26 +0000 (0:00:05.997) 0:00:07.496 **********
2026-04-05 07:58:52.759777 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.759795 | orchestrator |
2026-04-05 07:58:52.759811 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-05 07:58:52.759827 | orchestrator | Sunday 05 April 2026 07:56:31 +0000 (0:00:05.066) 0:00:12.563 **********
2026-04-05 07:58:52.759845 | orchestrator | changed: [localhost]
2026-04-05 07:58:52.759861 | orchestrator |
2026-04-05 07:58:52.759878 | orchestrator | TASK [Create test project] *****************************************************
2026-04-05 07:58:52.759896 | orchestrator | Sunday 05 April 2026 07:56:40 +0000 (0:00:09.253) 0:00:21.816 **********
2026-04-05 07:58:52.759912 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.759928 | orchestrator |
2026-04-05 07:58:52.759944 | orchestrator | TASK [Create test user] ********************************************************
2026-04-05 07:58:52.759961 | orchestrator | Sunday 05 April 2026 07:56:45 +0000 (0:00:05.138) 0:00:26.954 **********
2026-04-05 07:58:52.759978 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.759995 | orchestrator |
2026-04-05 07:58:52.760012 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-05 07:58:52.760029 | orchestrator | Sunday 05 April 2026 07:56:50 +0000 (0:00:05.047) 0:00:32.002 **********
2026-04-05 07:58:52.760046 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-05 07:58:52.760064 | orchestrator | ok: [localhost] => (item=member)
2026-04-05 07:58:52.760113 | orchestrator | changed: [localhost] => (item=creator)
2026-04-05 07:58:52.760132 | orchestrator |
2026-04-05 07:58:52.760148 | orchestrator | TASK [Create test server group] ************************************************
2026-04-05 07:58:52.760192 | orchestrator | Sunday 05 April 2026 07:57:04 +0000 (0:00:13.465) 0:00:45.467 **********
2026-04-05 07:58:52.760210 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760226 | orchestrator |
2026-04-05 07:58:52.760242 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-05 07:58:52.760252 | orchestrator | Sunday 05 April 2026 07:57:09 +0000 (0:00:05.462) 0:00:50.930 **********
2026-04-05 07:58:52.760262 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760271 | orchestrator |
2026-04-05 07:58:52.760281 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-05 07:58:52.760292 | orchestrator | Sunday 05 April 2026 07:57:14 +0000 (0:00:05.132) 0:00:56.062 **********
2026-04-05 07:58:52.760301 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760311 | orchestrator |
2026-04-05 07:58:52.760321 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-05 07:58:52.760330 | orchestrator | Sunday 05 April 2026 07:57:20 +0000 (0:00:05.252) 0:01:01.315 **********
2026-04-05 07:58:52.760340 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760350 | orchestrator |
2026-04-05 07:58:52.760359 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-05 07:58:52.760369 | orchestrator | Sunday 05 April 2026 07:57:24 +0000 (0:00:04.870) 0:01:06.186 **********
2026-04-05 07:58:52.760378 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760395 | orchestrator |
2026-04-05 07:58:52.760411 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-05 07:58:52.760428 | orchestrator | Sunday 05 April 2026 07:57:29 +0000 (0:00:05.044) 0:01:11.231 **********
2026-04-05 07:58:52.760444 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760460 | orchestrator |
2026-04-05 07:58:52.760475 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-05 07:58:52.760491 | orchestrator | Sunday 05 April 2026 07:57:34 +0000 (0:00:04.937) 0:01:16.168 **********
2026-04-05 07:58:52.760507 | orchestrator | ok: [localhost] => (item={'name': 'test-1'})
2026-04-05 07:58:52.760525 | orchestrator | ok: [localhost] => (item={'name': 'test-2'})
2026-04-05 07:58:52.760541 | orchestrator | ok: [localhost] => (item={'name': 'test-3'})
2026-04-05 07:58:52.760558 | orchestrator |
2026-04-05 07:58:52.760572 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-05 07:58:52.760582 | orchestrator | Sunday 05 April 2026 07:57:47 +0000 (0:00:12.828) 0:01:28.997 **********
2026-04-05 07:58:52.760591 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-05 07:58:52.760602 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-05 07:58:52.760611 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-05 07:58:52.760621 | orchestrator |
2026-04-05 07:58:52.760631 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-05 07:58:52.760640 | orchestrator | Sunday 05 April 2026 07:58:00 +0000 (0:00:12.970) 0:01:41.968 **********
2026-04-05 07:58:52.760650 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-05 07:58:52.760660 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-05 07:58:52.760670 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-05 07:58:52.760680 | orchestrator |
2026-04-05 07:58:52.760690 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-05 07:58:52.760699 | orchestrator |
2026-04-05 07:58:52.760709 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-05 07:58:52.760718 | orchestrator | Sunday 05 April 2026 07:58:15 +0000 (0:00:14.513) 0:01:56.481 **********
2026-04-05 07:58:52.760752 | orchestrator | ok: [localhost]
2026-04-05 07:58:52.760762 | orchestrator |
2026-04-05 07:58:52.760792 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-05 07:58:52.760802 | orchestrator | Sunday 05 April 2026 07:58:20 +0000 (0:00:04.948) 0:02:01.429 **********
2026-04-05 07:58:52.760812 | orchestrator | skipping: [localhost]
2026-04-05 07:58:52.760822 | orchestrator |
2026-04-05 07:58:52.760832 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-05 07:58:52.760841 | orchestrator | Sunday 05 April 2026 07:58:21 +0000 (0:00:01.148) 0:02:02.578 **********
2026-04-05 07:58:52.760851 | orchestrator | skipping: [localhost]
2026-04-05 07:58:52.760861 | orchestrator |
2026-04-05 07:58:52.760870 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-05 07:58:52.760879 | orchestrator | Sunday 05 April 2026
07:58:22 +0000 (0:00:01.109) 0:02:03.688 ********** 2026-04-05 07:58:52.760889 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-05 07:58:52.760898 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-05 07:58:52.760908 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-05 07:58:52.760917 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-05 07:58:52.760927 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-05 07:58:52.760936 | orchestrator | skipping: [localhost] 2026-04-05 07:58:52.760946 | orchestrator | 2026-04-05 07:58:52.760955 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-05 07:58:52.760965 | orchestrator | Sunday 05 April 2026 07:58:23 +0000 (0:00:01.276) 0:02:04.965 ********** 2026-04-05 07:58:52.760974 | orchestrator | skipping: [localhost] 2026-04-05 07:58:52.760984 | orchestrator | 2026-04-05 07:58:52.760993 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-05 07:58:52.761003 | orchestrator | Sunday 05 April 2026 07:58:24 +0000 (0:00:01.267) 0:02:06.232 ********** 2026-04-05 07:58:52.761012 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 07:58:52.761021 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 07:58:52.761031 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 07:58:52.761040 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 07:58:52.761050 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 07:58:52.761059 | orchestrator | 2026-04-05 07:58:52.761069 | orchestrator | 
TASK [Wait for instance creation to complete] ********************************** 2026-04-05 07:58:52.761079 | orchestrator | Sunday 05 April 2026 07:58:31 +0000 (0:00:06.014) 0:02:12.246 ********** 2026-04-05 07:58:52.761088 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-05 07:58:52.761101 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j689086522471.4222', 'results_file': '/ansible/.ansible_async/j689086522471.4222', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:58:52.761113 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j976454984629.4247', 'results_file': '/ansible/.ansible_async/j976454984629.4247', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:58:52.761123 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j469103109590.4272', 'results_file': '/ansible/.ansible_async/j469103109590.4272', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:58:52.761133 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j421701049889.4297', 'results_file': '/ansible/.ansible_async/j421701049889.4297', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:58:52.761150 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j649467139046.4329', 'results_file': '/ansible/.ansible_async/j649467139046.4329', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:58:52.761159 | orchestrator | 2026-04-05 07:58:52.761192 | orchestrator | TASK [Add 
metadata to instances] *********************************************** 2026-04-05 07:58:52.761202 | orchestrator | Sunday 05 April 2026 07:58:46 +0000 (0:00:15.971) 0:02:28.218 ********** 2026-04-05 07:58:52.761212 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 07:58:52.761222 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 07:58:52.761232 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 07:58:52.761242 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 07:58:52.761251 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 07:58:52.761261 | orchestrator | 2026-04-05 07:58:52.761271 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-05 07:58:52.761286 | orchestrator | Sunday 05 April 2026 07:58:52 +0000 (0:00:05.782) 0:02:34.000 ********** 2026-04-05 07:59:53.472821 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j149676831744.4400', 'results_file': '/ansible/.ansible_async/j149676831744.4400', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.472931 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j758012945126.4425', 'results_file': '/ansible/.ansible_async/j758012945126.4425', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.472947 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j728540918516.4450', 'results_file': '/ansible/.ansible_async/j728540918516.4450', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 
07:59:53.472959 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j588537104020.4475', 'results_file': '/ansible/.ansible_async/j588537104020.4475', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.472971 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j937239436025.4500', 'results_file': '/ansible/.ansible_async/j937239436025.4500', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.472983 | orchestrator | 2026-04-05 07:59:53.472996 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-05 07:59:53.473008 | orchestrator | Sunday 05 April 2026 07:58:57 +0000 (0:00:04.749) 0:02:38.750 ********** 2026-04-05 07:59:53.473019 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 07:59:53.473030 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 07:59:53.473040 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 07:59:53.473051 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 07:59:53.473062 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 07:59:53.473073 | orchestrator | 2026-04-05 07:59:53.473084 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-05 07:59:53.473095 | orchestrator | Sunday 05 April 2026 07:59:03 +0000 (0:00:05.867) 0:02:44.617 ********** 2026-04-05 07:59:53.473131 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-05 07:59:53.473143 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j700319359272.4564', 'results_file': '/ansible/.ansible_async/j700319359272.4564', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.473170 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j463756107906.4589', 'results_file': '/ansible/.ansible_async/j463756107906.4589', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.473182 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j849151787441.4623', 'results_file': '/ansible/.ansible_async/j849151787441.4623', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.473193 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j340327286256.4649', 'results_file': '/ansible/.ansible_async/j340327286256.4649', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.473204 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j400519354233.4675', 'results_file': '/ansible/.ansible_async/j400519354233.4675', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-05 07:59:53.473215 | orchestrator | 2026-04-05 07:59:53.473227 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-05 07:59:53.473238 | orchestrator | Sunday 05 April 2026 07:59:14 +0000 (0:00:11.375) 0:02:55.992 ********** 2026-04-05 07:59:53.473296 | orchestrator | ok: [localhost] 2026-04-05 07:59:53.473311 | orchestrator | 2026-04-05 07:59:53.473322 | orchestrator 
| TASK [Attach test volume] ****************************************************** 2026-04-05 07:59:53.473334 | orchestrator | Sunday 05 April 2026 07:59:19 +0000 (0:00:05.174) 0:03:01.167 ********** 2026-04-05 07:59:53.473346 | orchestrator | ok: [localhost] 2026-04-05 07:59:53.473363 | orchestrator | 2026-04-05 07:59:53.473377 | orchestrator | TASK [Create floating ip addresses] ******************************************** 2026-04-05 07:59:53.473409 | orchestrator | Sunday 05 April 2026 07:59:25 +0000 (0:00:05.986) 0:03:07.154 ********** 2026-04-05 07:59:53.473422 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-05 07:59:53.473435 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-05 07:59:53.473448 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-05 07:59:53.473461 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-05 07:59:53.473474 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-05 07:59:53.473486 | orchestrator | 2026-04-05 07:59:53.473499 | orchestrator | TASK [Print floating ip addresses] ********************************************* 2026-04-05 07:59:53.473511 | orchestrator | Sunday 05 April 2026 07:59:51 +0000 (0:00:25.745) 0:03:32.899 ********** 2026-04-05 07:59:53.473524 | orchestrator | ok: [localhost] => (item=test) => { 2026-04-05 07:59:53.473538 | orchestrator |  "msg": "test: 192.168.112.191" 2026-04-05 07:59:53.473549 | orchestrator | } 2026-04-05 07:59:53.473560 | orchestrator | ok: [localhost] => (item=test-1) => { 2026-04-05 07:59:53.473571 | orchestrator |  "msg": "test-1: 192.168.112.105" 2026-04-05 07:59:53.473582 | orchestrator | } 2026-04-05 07:59:53.473593 | orchestrator | ok: [localhost] => (item=test-2) => { 2026-04-05 07:59:53.473604 | orchestrator |  "msg": "test-2: 192.168.112.170" 2026-04-05 07:59:53.473614 | orchestrator | } 
2026-04-05 07:59:53.473625 | orchestrator | ok: [localhost] => (item=test-3) => { 2026-04-05 07:59:53.473636 | orchestrator |  "msg": "test-3: 192.168.112.137" 2026-04-05 07:59:53.473657 | orchestrator | } 2026-04-05 07:59:53.473668 | orchestrator | ok: [localhost] => (item=test-4) => { 2026-04-05 07:59:53.473679 | orchestrator |  "msg": "test-4: 192.168.112.180" 2026-04-05 07:59:53.473689 | orchestrator | } 2026-04-05 07:59:53.473700 | orchestrator | 2026-04-05 07:59:53.473711 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-05 07:59:53.473723 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-05 07:59:53.473736 | orchestrator | 2026-04-05 07:59:53.473747 | orchestrator | 2026-04-05 07:59:53.473758 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-05 07:59:53.473769 | orchestrator | Sunday 05 April 2026 07:59:53 +0000 (0:00:01.574) 0:03:34.474 ********** 2026-04-05 07:59:53.473780 | orchestrator | =============================================================================== 2026-04-05 07:59:53.473790 | orchestrator | Create floating ip addresses ------------------------------------------- 25.75s 2026-04-05 07:59:53.473801 | orchestrator | Wait for instance creation to complete --------------------------------- 15.97s 2026-04-05 07:59:53.473812 | orchestrator | Create test routers ---------------------------------------------------- 14.51s 2026-04-05 07:59:53.473823 | orchestrator | Add member roles to user test ------------------------------------------ 13.46s 2026-04-05 07:59:53.473834 | orchestrator | Create test subnets ---------------------------------------------------- 12.97s 2026-04-05 07:59:53.473844 | orchestrator | Create test networks --------------------------------------------------- 12.83s 2026-04-05 07:59:53.473855 | orchestrator | Wait for tags to be added 
---------------------------------------------- 11.38s 2026-04-05 07:59:53.473866 | orchestrator | Add manager role to user test-admin ------------------------------------- 9.25s 2026-04-05 07:59:53.473877 | orchestrator | Create test instances --------------------------------------------------- 6.01s 2026-04-05 07:59:53.473887 | orchestrator | Create test domain ------------------------------------------------------ 6.00s 2026-04-05 07:59:53.473898 | orchestrator | Attach test volume ------------------------------------------------------ 5.99s 2026-04-05 07:59:53.473909 | orchestrator | Add tag to instances ---------------------------------------------------- 5.87s 2026-04-05 07:59:53.473920 | orchestrator | Add metadata to instances ----------------------------------------------- 5.78s 2026-04-05 07:59:53.473930 | orchestrator | Create test server group ------------------------------------------------ 5.46s 2026-04-05 07:59:53.473941 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.25s 2026-04-05 07:59:53.473952 | orchestrator | Create test volume ------------------------------------------------------ 5.17s 2026-04-05 07:59:53.473963 | orchestrator | Create test project ----------------------------------------------------- 5.14s 2026-04-05 07:59:53.473973 | orchestrator | Create ssh security group ----------------------------------------------- 5.13s 2026-04-05 07:59:53.473984 | orchestrator | Create test-admin user -------------------------------------------------- 5.07s 2026-04-05 07:59:53.473995 | orchestrator | Create test user -------------------------------------------------------- 5.05s 2026-04-05 07:59:53.666401 | orchestrator | + server_list 2026-04-05 07:59:53.666498 | orchestrator | + openstack --os-cloud test server list 2026-04-05 07:59:57.322635 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 
2026-04-05 07:59:57.322785 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-05 07:59:57.322813 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-05 07:59:57.322834 | orchestrator | | 0b1222a5-f3cb-41ab-92be-469ce301e9ad | test-3 | ACTIVE | test-2=192.168.112.137, 192.168.201.118 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 07:59:57.322853 | orchestrator | | ecfbad1d-1672-44a0-9b9d-e8da7a8a2f92 | test-4 | ACTIVE | test-3=192.168.112.180, 192.168.202.73 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 07:59:57.322929 | orchestrator | | 0a2bbed2-8c42-43cd-8046-2895ded493c5 | test-2 | ACTIVE | test-2=192.168.112.170, 192.168.201.216 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 07:59:57.322954 | orchestrator | | 097c8630-3aa1-452f-8464-e68c61053ff7 | test | ACTIVE | test-1=192.168.112.191, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 07:59:57.322975 | orchestrator | | aa0a3eb0-7fe8-4d0b-b428-c7baecf5448f | test-1 | ACTIVE | test-1=192.168.112.105, 192.168.200.251 | N/A (booted from volume) | SCS-1L-1 | 2026-04-05 07:59:57.322994 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-05 07:59:57.581595 | orchestrator | + openstack --os-cloud test server show test 2026-04-05 08:00:00.865404 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:00.865515 | orchestrator | | Field | Value | 2026-04-05 
08:00:00.865530 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:00.865541 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 08:00:00.865551 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 08:00:00.865561 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 08:00:00.865571 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-05 08:00:00.865582 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 08:00:00.865617 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 08:00:00.865644 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 08:00:00.865656 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 08:00:00.865666 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 08:00:00.865676 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 08:00:00.865687 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 08:00:00.865697 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 08:00:00.865707 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 08:00:00.865718 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 08:00:00.865734 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 08:00:00.865748 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:08.000000 | 2026-04-05 08:00:00.865766 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 08:00:00.865777 | orchestrator | | accessIPv4 | | 2026-04-05 08:00:00.865787 | orchestrator | | accessIPv6 | | 2026-04-05 
08:00:00.865798 | orchestrator | | addresses | test-1=192.168.112.191, 192.168.200.66 | 2026-04-05 08:00:00.865808 | orchestrator | | config_drive | | 2026-04-05 08:00:00.865818 | orchestrator | | created | 2026-04-05T04:24:41Z | 2026-04-05 08:00:00.865829 | orchestrator | | description | None | 2026-04-05 08:00:00.865844 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 08:00:00.865854 | orchestrator | | hostId | 51219fb0b745a037ca58054f81e361499c4bce5c336f51de418af919 | 2026-04-05 08:00:00.865871 | orchestrator | | host_status | None | 2026-04-05 08:00:00.865890 | orchestrator | | id | 097c8630-3aa1-452f-8464-e68c61053ff7 | 2026-04-05 08:00:00.865902 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 08:00:00.865916 | orchestrator | | key_name | test | 2026-04-05 08:00:00.865928 | orchestrator | | locked | False | 2026-04-05 08:00:00.865940 | orchestrator | | locked_reason | None | 2026-04-05 08:00:00.865952 | orchestrator | | name | test | 2026-04-05 08:00:00.865965 | orchestrator | | pinned_availability_zone | None | 2026-04-05 08:00:00.865983 | orchestrator | | progress | 0 | 2026-04-05 08:00:00.865996 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 08:00:00.866012 | orchestrator | | properties | hostname='test' | 2026-04-05 08:00:00.866095 | orchestrator | | security_groups | name='icmp' | 2026-04-05 08:00:00.866109 | orchestrator | | | name='ssh' | 2026-04-05 08:00:00.866121 | orchestrator | | server_groups | None | 2026-04-05 08:00:00.866135 | orchestrator | | status | ACTIVE | 2026-04-05 08:00:00.866147 | orchestrator | | tags | test | 2026-04-05 
08:00:00.866160 | orchestrator | | trusted_image_certificates | None | 2026-04-05 08:00:00.866178 | orchestrator | | updated | 2026-04-05T07:58:53Z | 2026-04-05 08:00:00.866190 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 08:00:00.866202 | orchestrator | | volumes_attached | delete_on_termination='True', id='01b028e0-fdbe-4ff7-8601-c3cc2494aae0' | 2026-04-05 08:00:00.866215 | orchestrator | | | delete_on_termination='False', id='f2d72a2c-7635-430f-aad8-fdc622687c0d' | 2026-04-05 08:00:00.868578 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:01.152306 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-05 08:00:04.163182 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:04.163351 | orchestrator | | Field | Value | 2026-04-05 08:00:04.163787 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:04.163809 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 08:00:04.163846 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-05 08:00:04.163865 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-05 08:00:04.163880 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-05 08:00:04.163894 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-05 08:00:04.163908 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-05 08:00:04.163944 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-05 08:00:04.163960 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-05 08:00:04.163974 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-05 08:00:04.163989 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-05 08:00:04.164009 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-05 08:00:04.164026 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-05 08:00:04.164038 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-05 08:00:04.164049 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-05 08:00:04.164061 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-05 08:00:04.164073 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:10.000000 | 2026-04-05 08:00:04.164093 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-05 08:00:04.164105 | orchestrator | | accessIPv4 | | 2026-04-05 08:00:04.164116 | orchestrator | | accessIPv6 | | 2026-04-05 08:00:04.164127 | orchestrator | | 
addresses | test-1=192.168.112.105, 192.168.200.251 | 2026-04-05 08:00:04.164149 | orchestrator | | config_drive | | 2026-04-05 08:00:04.164165 | orchestrator | | created | 2026-04-05T04:24:41Z | 2026-04-05 08:00:04.164177 | orchestrator | | description | None | 2026-04-05 08:00:04.164188 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-05 08:00:04.164199 | orchestrator | | hostId | 51219fb0b745a037ca58054f81e361499c4bce5c336f51de418af919 | 2026-04-05 08:00:04.164211 | orchestrator | | host_status | None | 2026-04-05 08:00:04.164230 | orchestrator | | id | aa0a3eb0-7fe8-4d0b-b428-c7baecf5448f | 2026-04-05 08:00:04.164242 | orchestrator | | image | N/A (booted from volume) | 2026-04-05 08:00:04.164254 | orchestrator | | key_name | test | 2026-04-05 08:00:04.164321 | orchestrator | | locked | False | 2026-04-05 08:00:04.164334 | orchestrator | | locked_reason | None | 2026-04-05 08:00:04.164350 | orchestrator | | name | test-1 | 2026-04-05 08:00:04.164361 | orchestrator | | pinned_availability_zone | None | 2026-04-05 08:00:04.164372 | orchestrator | | progress | 0 | 2026-04-05 08:00:04.164384 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 | 2026-04-05 08:00:04.164395 | orchestrator | | properties | hostname='test-1' | 2026-04-05 08:00:04.164415 | orchestrator | | security_groups | name='icmp' | 2026-04-05 08:00:04.164427 | orchestrator | | | name='ssh' | 2026-04-05 08:00:04.164446 | orchestrator | | server_groups | None | 2026-04-05 08:00:04.164457 | orchestrator | | status | ACTIVE | 2026-04-05 08:00:04.164468 | orchestrator | | tags | test | 2026-04-05 08:00:04.164484 | orchestrator | | 
trusted_image_certificates | None | 2026-04-05 08:00:04.164496 | orchestrator | | updated | 2026-04-05T07:58:53Z | 2026-04-05 08:00:04.164507 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 | 2026-04-05 08:00:04.164519 | orchestrator | | volumes_attached | delete_on_termination='True', id='3620145d-6e82-46dc-a90f-d44818509785' | 2026-04-05 08:00:04.167409 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:04.412082 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-05 08:00:07.449061 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:07.449166 | orchestrator | | Field | Value | 2026-04-05 08:00:07.449181 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-05 08:00:07.449194 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-05 08:00:07.449206 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova |
2026-04-05 08:00:07.449224 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 08:00:07.449236 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-05 08:00:07.449248 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 08:00:07.449259 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 08:00:07.449323 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 08:00:07.449344 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 08:00:07.449355 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 08:00:07.449367 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 08:00:07.449378 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 08:00:07.449390 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 08:00:07.449406 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 08:00:07.449418 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 08:00:07.449429 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 08:00:07.449440 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:09.000000 |
2026-04-05 08:00:07.449458 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 08:00:07.449477 | orchestrator | | accessIPv4 | |
2026-04-05 08:00:07.449488 | orchestrator | | accessIPv6 | |
2026-04-05 08:00:07.449500 | orchestrator | | addresses | test-2=192.168.112.170, 192.168.201.216 |
2026-04-05 08:00:07.449511 | orchestrator | | config_drive | |
2026-04-05 08:00:07.449523 | orchestrator | | created | 2026-04-05T04:24:42Z |
2026-04-05 08:00:07.449534 | orchestrator | | description | None |
2026-04-05 08:00:07.449545 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 08:00:07.449557 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 |
2026-04-05 08:00:07.449568 | orchestrator | | host_status | None |
2026-04-05 08:00:07.449638 | orchestrator | | id | 0a2bbed2-8c42-43cd-8046-2895ded493c5 |
2026-04-05 08:00:07.449660 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 08:00:07.449674 | orchestrator | | key_name | test |
2026-04-05 08:00:07.449688 | orchestrator | | locked | False |
2026-04-05 08:00:07.449702 | orchestrator | | locked_reason | None |
2026-04-05 08:00:07.449716 | orchestrator | | name | test-2 |
2026-04-05 08:00:07.449734 | orchestrator | | pinned_availability_zone | None |
2026-04-05 08:00:07.449748 | orchestrator | | progress | 0 |
2026-04-05 08:00:07.449762 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 |
2026-04-05 08:00:07.449781 | orchestrator | | properties | hostname='test-2' |
2026-04-05 08:00:07.449802 | orchestrator | | security_groups | name='icmp' |
2026-04-05 08:00:07.449818 | orchestrator | | | name='ssh' |
2026-04-05 08:00:07.449832 | orchestrator | | server_groups | None |
2026-04-05 08:00:07.449846 | orchestrator | | status | ACTIVE |
2026-04-05 08:00:07.449859 | orchestrator | | tags | test |
2026-04-05 08:00:07.449878 | orchestrator | | trusted_image_certificates | None |
2026-04-05 08:00:07.449892 | orchestrator | | updated | 2026-04-05T07:58:54Z |
2026-04-05 08:00:07.449905 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 |
2026-04-05 08:00:07.449924 | orchestrator | | volumes_attached | delete_on_termination='True', id='7e5114bf-2f9d-4068-9186-5c9933f3258c' |
2026-04-05 08:00:07.451535 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:07.616404 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-05 08:00:10.319019 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:10.319111 | orchestrator | | Field | Value |
2026-04-05 08:00:10.319127 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:10.319139 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 08:00:10.319151 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 08:00:10.319178 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 08:00:10.319190 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-05 08:00:10.319246 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 08:00:10.319258 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 08:00:10.319343 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 08:00:10.319357 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 08:00:10.319369 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 08:00:10.319381 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 08:00:10.319393 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 08:00:10.319404 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 08:00:10.319421 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 08:00:10.319433 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 08:00:10.319453 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 08:00:10.319464 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:09.000000 |
2026-04-05 08:00:10.319483 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 08:00:10.319495 | orchestrator | | accessIPv4 | |
2026-04-05 08:00:10.319507 | orchestrator | | accessIPv6 | |
2026-04-05 08:00:10.319518 | orchestrator | | addresses | test-2=192.168.112.137, 192.168.201.118 |
2026-04-05 08:00:10.319530 | orchestrator | | config_drive | |
2026-04-05 08:00:10.319542 | orchestrator | | created | 2026-04-05T04:24:46Z |
2026-04-05 08:00:10.319558 | orchestrator | | description | None |
2026-04-05 08:00:10.319576 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 08:00:10.319588 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 |
2026-04-05 08:00:10.319599 | orchestrator | | host_status | None |
2026-04-05 08:00:10.319617 | orchestrator | | id | 0b1222a5-f3cb-41ab-92be-469ce301e9ad |
2026-04-05 08:00:10.319630 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 08:00:10.319641 | orchestrator | | key_name | test |
2026-04-05 08:00:10.319653 | orchestrator | | locked | False |
2026-04-05 08:00:10.319664 | orchestrator | | locked_reason | None |
2026-04-05 08:00:10.319676 | orchestrator | | name | test-3 |
2026-04-05 08:00:10.319698 | orchestrator | | pinned_availability_zone | None |
2026-04-05 08:00:10.319710 | orchestrator | | progress | 0 |
2026-04-05 08:00:10.319721 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 |
2026-04-05 08:00:10.319733 | orchestrator | | properties | hostname='test-3' |
2026-04-05 08:00:10.319751 | orchestrator | | security_groups | name='icmp' |
2026-04-05 08:00:10.319763 | orchestrator | | | name='ssh' |
2026-04-05 08:00:10.319775 | orchestrator | | server_groups | None |
2026-04-05 08:00:10.319786 | orchestrator | | status | ACTIVE |
2026-04-05 08:00:10.319798 | orchestrator | | tags | test |
2026-04-05 08:00:10.319816 | orchestrator | | trusted_image_certificates | None |
2026-04-05 08:00:10.319838 | orchestrator | | updated | 2026-04-05T07:58:55Z |
2026-04-05 08:00:10.319850 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 |
2026-04-05 08:00:10.319862 | orchestrator | | volumes_attached | delete_on_termination='True', id='2eb5d65b-8149-469b-ac2c-861b74b65c25' |
2026-04-05 08:00:10.324958 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:10.498010 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-05 08:00:13.133005 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:13.133095 | orchestrator | | Field | Value |
2026-04-05 08:00:13.133111 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:13.133124 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-05 08:00:13.133157 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-05 08:00:13.133169 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-05 08:00:13.133193 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-05 08:00:13.133204 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-05 08:00:13.133216 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-05 08:00:13.133243 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-05 08:00:13.133256 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-05 08:00:13.133267 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-05 08:00:13.133336 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-05 08:00:13.133347 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-05 08:00:13.133366 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-05 08:00:13.133378 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-05 08:00:13.133418 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-05 08:00:13.133431 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-05 08:00:13.133443 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-05T04:25:11.000000 |
2026-04-05 08:00:13.133463 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-05 08:00:13.133475 | orchestrator | | accessIPv4 | |
2026-04-05 08:00:13.133486 | orchestrator | | accessIPv6 | |
2026-04-05 08:00:13.133498 | orchestrator | | addresses | test-3=192.168.112.180, 192.168.202.73 |
2026-04-05 08:00:13.133516 | orchestrator | | config_drive | |
2026-04-05 08:00:13.133528 | orchestrator | | created | 2026-04-05T04:24:44Z |
2026-04-05 08:00:13.133543 | orchestrator | | description | None |
2026-04-05 08:00:13.133555 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-05 08:00:13.133569 | orchestrator | | hostId | 85390d0c9d860f2e0b5bcc02028e5f686a22dac1f2f23e84108d4135 |
2026-04-05 08:00:13.133582 | orchestrator | | host_status | None |
2026-04-05 08:00:13.133603 | orchestrator | | id | ecfbad1d-1672-44a0-9b9d-e8da7a8a2f92 |
2026-04-05 08:00:13.133616 | orchestrator | | image | N/A (booted from volume) |
2026-04-05 08:00:13.133630 | orchestrator | | key_name | test |
2026-04-05 08:00:13.133649 | orchestrator | | locked | False |
2026-04-05 08:00:13.133663 | orchestrator | | locked_reason | None |
2026-04-05 08:00:13.133677 | orchestrator | | name | test-4 |
2026-04-05 08:00:13.133695 | orchestrator | | pinned_availability_zone | None |
2026-04-05 08:00:13.133710 | orchestrator | | progress | 0 |
2026-04-05 08:00:13.133723 | orchestrator | | project_id | 2006ba074aee409caf78398cac87b091 |
2026-04-05 08:00:13.133736 | orchestrator | | properties | hostname='test-4' |
2026-04-05 08:00:13.133755 | orchestrator | | security_groups | name='icmp' |
2026-04-05 08:00:13.133769 | orchestrator | | | name='ssh' |
2026-04-05 08:00:13.133789 | orchestrator | | server_groups | None |
2026-04-05 08:00:13.133802 | orchestrator | | status | ACTIVE |
2026-04-05 08:00:13.133816 | orchestrator | | tags | test |
2026-04-05 08:00:13.133830 | orchestrator | | trusted_image_certificates | None |
2026-04-05 08:00:13.133845 | orchestrator | | updated | 2026-04-05T07:58:55Z |
2026-04-05 08:00:13.133857 | orchestrator | | user_id | ae428c99ae2e480bafa6250d1dfd1056 |
2026-04-05 08:00:13.133868 | orchestrator | | volumes_attached | delete_on_termination='True', id='9325f864-f7d6-4498-ba97-70dbfcc5f82f' |
2026-04-05 08:00:13.135776 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-05 08:00:13.301880 | orchestrator | + server_ping
2026-04-05 08:00:13.302544 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-05 08:00:13.303445 | orchestrator | ++ tr -d '\r'
2026-04-05 08:00:16.002773 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 08:00:16.003170 | orchestrator | + ping -c3 192.168.112.105
2026-04-05 08:00:16.021132 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2026-04-05 08:00:16.021230 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=7.76 ms
2026-04-05 08:00:17.016980 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.08 ms
2026-04-05 08:00:18.017940 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.85 ms
2026-04-05 08:00:18.018059 | orchestrator |
2026-04-05 08:00:18.018071 | orchestrator | --- 192.168.112.105 ping statistics ---
2026-04-05 08:00:18.018080 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 08:00:18.018087 | orchestrator | rtt min/avg/max/mdev = 1.846/3.893/7.760/2.735 ms
2026-04-05 08:00:18.018654 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 08:00:18.018670 | orchestrator | + ping -c3 192.168.112.137
2026-04-05 08:00:18.030858 | orchestrator | PING 192.168.112.137 (192.168.112.137) 56(84) bytes of data.
2026-04-05 08:00:18.030934 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=1 ttl=63 time=6.73 ms
2026-04-05 08:00:19.028819 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=2 ttl=63 time=2.40 ms
2026-04-05 08:00:20.029778 | orchestrator | 64 bytes from 192.168.112.137: icmp_seq=3 ttl=63 time=1.99 ms
2026-04-05 08:00:20.029888 | orchestrator |
2026-04-05 08:00:20.029914 | orchestrator | --- 192.168.112.137 ping statistics ---
2026-04-05 08:00:20.029935 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 08:00:20.029954 | orchestrator | rtt min/avg/max/mdev = 1.990/3.705/6.731/2.145 ms
2026-04-05 08:00:20.029975 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 08:00:20.029994 | orchestrator | + ping -c3 192.168.112.170
2026-04-05 08:00:20.039598 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data.
2026-04-05 08:00:20.039675 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=6.42 ms
2026-04-05 08:00:21.037211 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.28 ms
2026-04-05 08:00:22.037964 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=1.47 ms
2026-04-05 08:00:22.038125 | orchestrator |
2026-04-05 08:00:22.038145 | orchestrator | --- 192.168.112.170 ping statistics ---
2026-04-05 08:00:22.038158 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-05 08:00:22.038170 | orchestrator | rtt min/avg/max/mdev = 1.471/3.391/6.418/2.165 ms
2026-04-05 08:00:22.038917 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 08:00:22.038954 | orchestrator | + ping -c3 192.168.112.191
2026-04-05 08:00:22.048631 | orchestrator | PING 192.168.112.191 (192.168.112.191) 56(84) bytes of data.
2026-04-05 08:00:22.048723 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=1 ttl=63 time=5.36 ms
2026-04-05 08:00:23.046332 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=2 ttl=63 time=2.17 ms
2026-04-05 08:00:24.047537 | orchestrator | 64 bytes from 192.168.112.191: icmp_seq=3 ttl=63 time=1.73 ms
2026-04-05 08:00:24.047788 | orchestrator |
2026-04-05 08:00:24.047817 | orchestrator | --- 192.168.112.191 ping statistics ---
2026-04-05 08:00:24.047830 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-05 08:00:24.047842 | orchestrator | rtt min/avg/max/mdev = 1.728/3.086/5.360/1.617 ms
2026-04-05 08:00:24.047867 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-05 08:00:24.047879 | orchestrator | + ping -c3 192.168.112.180
2026-04-05 08:00:24.064160 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2026-04-05 08:00:24.064256 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=11.7 ms
2026-04-05 08:00:25.055438 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.13 ms
2026-04-05 08:00:26.054814 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=1.37 ms
2026-04-05 08:00:26.054912 | orchestrator |
2026-04-05 08:00:26.054929 | orchestrator | --- 192.168.112.180 ping statistics ---
2026-04-05 08:00:26.054942 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-04-05 08:00:26.054954 | orchestrator | rtt min/avg/max/mdev = 1.371/5.053/11.656/4.678 ms
2026-04-05 08:00:26.055574 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-05 08:00:26.440106 | orchestrator | ok: Runtime: 0:10:39.306246
2026-04-05 08:00:26.485464 |
2026-04-05 08:00:26.485585 | PLAY RECAP
2026-04-05 08:00:26.485649 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-05 08:00:26.485676 |
2026-04-05 08:00:26.811464 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-05 08:00:26.813660 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 08:00:28.568346 |
2026-04-05 08:00:28.568534 | PLAY [Post output play]
2026-04-05 08:00:28.605475 |
2026-04-05 08:00:28.605713 | LOOP [stage-output : Register sources]
2026-04-05 08:00:28.662459 |
2026-04-05 08:00:28.662704 | TASK [stage-output : Check sudo]
2026-04-05 08:00:29.563228 | orchestrator | sudo: a password is required
2026-04-05 08:00:29.700226 | orchestrator | ok: Runtime: 0:00:00.017108
2026-04-05 08:00:29.715955 |
2026-04-05 08:00:29.716133 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-05 08:00:29.755886 |
2026-04-05 08:00:29.756201 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-05 08:00:29.825338 | orchestrator | ok
2026-04-05 08:00:29.835225 |
2026-04-05 08:00:29.835375 | LOOP [stage-output : Ensure target folders exist]
2026-04-05 08:00:30.300558 | orchestrator | ok: "docs"
2026-04-05 08:00:30.301072 |
2026-04-05 08:00:30.557614 | orchestrator | ok: "artifacts"
2026-04-05 08:00:30.801595 | orchestrator | ok: "logs"
2026-04-05 08:00:30.826202 |
2026-04-05 08:00:30.826397 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-05 08:00:30.867178 |
2026-04-05 08:00:30.867481 | TASK [stage-output : Make all log files readable]
2026-04-05 08:00:31.160333 | orchestrator | ok
2026-04-05 08:00:31.169535 |
2026-04-05 08:00:31.169675 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-05 08:00:31.216356 | orchestrator | skipping: Conditional result was False
2026-04-05 08:00:31.232456 |
2026-04-05 08:00:31.232649 | TASK [stage-output : Discover log files for compression]
2026-04-05 08:00:31.258658 | orchestrator | skipping: Conditional result was False
2026-04-05 08:00:31.274868 |
2026-04-05 08:00:31.275046 | LOOP [stage-output : Archive everything from logs]
2026-04-05 08:00:31.327492 |
2026-04-05 08:00:31.327778 | PLAY [Post cleanup play]
2026-04-05 08:00:31.340481 |
2026-04-05 08:00:31.340624 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 08:00:31.396452 | orchestrator | ok
2026-04-05 08:00:31.412800 |
2026-04-05 08:00:31.412950 | TASK [Set cloud fact (local deployment)]
2026-04-05 08:00:31.438446 | orchestrator | skipping: Conditional result was False
2026-04-05 08:00:31.448556 |
2026-04-05 08:00:31.448702 | TASK [Clean the cloud environment]
2026-04-05 08:00:32.032989 | orchestrator | 2026-04-05 08:00:32 - clean up servers
2026-04-05 08:00:32.867881 | orchestrator | 2026-04-05 08:00:32 - testbed-manager
2026-04-05 08:00:32.957365 | orchestrator | 2026-04-05 08:00:32 - testbed-node-3
2026-04-05 08:00:33.052187 | orchestrator | 2026-04-05 08:00:33 - testbed-node-2
2026-04-05 08:00:33.145468 | orchestrator | 2026-04-05 08:00:33 - testbed-node-0
2026-04-05 08:00:33.238563 | orchestrator | 2026-04-05 08:00:33 - testbed-node-5
2026-04-05 08:00:33.404775 | orchestrator | 2026-04-05 08:00:33 - testbed-node-4
2026-04-05 08:00:33.496598 | orchestrator | 2026-04-05 08:00:33 - testbed-node-1
2026-04-05 08:00:33.594119 | orchestrator | 2026-04-05 08:00:33 - clean up keypairs
2026-04-05 08:00:33.609543 | orchestrator | 2026-04-05 08:00:33 - testbed
2026-04-05 08:00:33.642535 | orchestrator | 2026-04-05 08:00:33 - wait for servers to be gone
2026-04-05 08:00:44.745642 | orchestrator | 2026-04-05 08:00:44 - clean up ports
2026-04-05 08:00:44.951360 | orchestrator | 2026-04-05 08:00:44 - 12d030cc-352d-434f-ae0b-c0e15d2664a1
2026-04-05 08:00:45.234715 | orchestrator | 2026-04-05 08:00:45 - 552b39e0-003d-4f31-81b3-2cca3aaf67a7
2026-04-05 08:00:45.546179 | orchestrator | 2026-04-05 08:00:45 - 634634f0-091c-48b3-8273-59795721d20b
2026-04-05 08:00:45.761560 | orchestrator | 2026-04-05 08:00:45 - 8c0a2c06-e9ea-4b81-8ff5-19baa6f1b331
2026-04-05 08:00:45.979218 | orchestrator | 2026-04-05 08:00:45 - a4c0292c-3f94-4c26-8ae0-90bdef4622d1
2026-04-05 08:00:46.376530 | orchestrator | 2026-04-05 08:00:46 - c1da3943-d6f3-4764-b4bf-0a3918ac8d4c
2026-04-05 08:00:46.605387 | orchestrator | 2026-04-05 08:00:46 - d72fff87-d18d-47c7-a944-9c0be84ba78d
2026-04-05 08:00:46.827157 | orchestrator | 2026-04-05 08:00:46 - clean up volumes
2026-04-05 08:00:46.962644 | orchestrator | 2026-04-05 08:00:46 - testbed-volume-4-node-base
2026-04-05 08:00:47.001016 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-3-node-base
2026-04-05 08:00:47.046392 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-1-node-base
2026-04-05 08:00:47.085523 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-0-node-base
2026-04-05 08:00:47.126365 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-5-node-base
2026-04-05 08:00:47.182109 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-2-node-base
2026-04-05 08:00:47.224227 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-5-node-5
2026-04-05 08:00:47.263140 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-7-node-4
2026-04-05 08:00:47.309615 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-1-node-4
2026-04-05 08:00:47.355811 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-3-node-3
2026-04-05 08:00:47.399122 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-manager-base
2026-04-05 08:00:47.443528 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-4-node-4
2026-04-05 08:00:47.486137 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-0-node-3
2026-04-05 08:00:47.527538 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-6-node-3
2026-04-05 08:00:47.570954 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-8-node-5
2026-04-05 08:00:47.615481 | orchestrator | 2026-04-05 08:00:47 - testbed-volume-2-node-5
2026-04-05 08:00:47.658440 | orchestrator | 2026-04-05 08:00:47 - disconnect routers
2026-04-05 08:00:47.773207 | orchestrator | 2026-04-05 08:00:47 - testbed
2026-04-05 08:00:48.855857 | orchestrator | 2026-04-05 08:00:48 - clean up subnets
2026-04-05 08:00:48.911624 | orchestrator | 2026-04-05 08:00:48 - subnet-testbed-management
2026-04-05 08:00:49.085802 | orchestrator | 2026-04-05 08:00:49 - clean up networks
2026-04-05 08:00:49.273413 | orchestrator | 2026-04-05 08:00:49 - net-testbed-management
2026-04-05 08:00:49.566970 | orchestrator | 2026-04-05 08:00:49 - clean up security groups
2026-04-05 08:00:49.606668 | orchestrator | 2026-04-05 08:00:49 - testbed-management
2026-04-05 08:00:49.727750 | orchestrator | 2026-04-05 08:00:49 - testbed-node
2026-04-05 08:00:49.850569 | orchestrator | 2026-04-05 08:00:49 - clean up floating ips
2026-04-05 08:00:49.887395 | orchestrator | 2026-04-05 08:00:49 - 81.163.192.238
2026-04-05 08:00:50.260470 | orchestrator | 2026-04-05 08:00:50 - clean up routers
2026-04-05 08:00:50.857018 | orchestrator | 2026-04-05 08:00:50 - testbed
2026-04-05 08:00:52.008110 | orchestrator | ok: Runtime: 0:00:19.995437
2026-04-05 08:00:52.012389 |
2026-04-05 08:00:52.012552 | PLAY RECAP
2026-04-05 08:00:52.012674 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-05 08:00:52.012735 |
2026-04-05 08:00:52.154968 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-05 08:00:52.157825 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 08:00:53.027240 |
2026-04-05 08:00:53.027480 | PLAY [Cleanup play]
2026-04-05 08:00:53.044189 |
2026-04-05 08:00:53.044326 | TASK [Set cloud fact (Zuul deployment)]
2026-04-05 08:00:53.107533 | orchestrator | ok
2026-04-05 08:00:53.117385 |
2026-04-05 08:00:53.117531 | TASK [Set cloud fact (local deployment)]
2026-04-05 08:00:53.151964 | orchestrator | skipping: Conditional result was False
2026-04-05 08:00:53.162611 |
2026-04-05 08:00:53.162730 | TASK [Clean the cloud environment]
2026-04-05 08:00:54.315291 | orchestrator | 2026-04-05 08:00:54 - clean up servers
2026-04-05 08:00:54.890158 | orchestrator | 2026-04-05 08:00:54 - clean up keypairs
2026-04-05 08:00:54.909937 | orchestrator | 2026-04-05 08:00:54 - wait for servers to be gone
2026-04-05 08:00:54.957675 | orchestrator | 2026-04-05 08:00:54 - clean up ports
2026-04-05 08:00:55.033168 | orchestrator | 2026-04-05 08:00:55 - clean up volumes
2026-04-05 08:00:55.106952 | orchestrator | 2026-04-05 08:00:55 - disconnect routers
2026-04-05 08:00:55.137444 | orchestrator | 2026-04-05 08:00:55 - clean up subnets
2026-04-05 08:00:55.160782 | orchestrator | 2026-04-05 08:00:55 - clean up networks
2026-04-05 08:00:55.320667 | orchestrator | 2026-04-05 08:00:55 - clean up security groups
2026-04-05 08:00:55.364663 | orchestrator | 2026-04-05 08:00:55 - clean up floating ips
2026-04-05 08:00:55.389569 | orchestrator | 2026-04-05 08:00:55 - clean up routers
2026-04-05 08:00:55.708962 | orchestrator | ok: Runtime: 0:00:01.463437
2026-04-05 08:00:55.712914 |
2026-04-05 08:00:55.713104 | PLAY RECAP
2026-04-05 08:00:55.713236 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-05 08:00:55.713299 |
2026-04-05 08:00:55.844149 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-05 08:00:55.845300 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 08:00:56.580544 |
2026-04-05 08:00:56.580713 | PLAY [Base post-fetch]
2026-04-05 08:00:56.596418 |
2026-04-05 08:00:56.596561 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-05 08:00:56.652564 | orchestrator | skipping: Conditional result was False
2026-04-05 08:00:56.669248 |
2026-04-05 08:00:56.669496 | TASK [fetch-output : Set log path for single node]
2026-04-05 08:00:56.709349 | orchestrator | ok
2026-04-05 08:00:56.717608 |
2026-04-05 08:00:56.717750 | LOOP [fetch-output : Ensure local output dirs]
2026-04-05 08:00:57.230445 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/logs"
2026-04-05 08:00:57.486578 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/artifacts"
2026-04-05 08:00:57.764155 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03468ae7aa2d4f669cb72cd41f266296/work/docs"
2026-04-05 08:00:57.791758 |
2026-04-05 08:00:57.791947 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-05 08:00:58.748284 | orchestrator | changed: .d..t...... ./
2026-04-05 08:00:58.748688 | orchestrator | changed: All items complete
2026-04-05 08:00:58.748753 |
2026-04-05 08:00:59.497861 | orchestrator | changed: .d..t...... ./
2026-04-05 08:01:00.223445 | orchestrator | changed: .d..t...... ./
2026-04-05 08:01:00.251577 |
2026-04-05 08:01:00.251753 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-05 08:01:00.288936 | orchestrator | skipping: Conditional result was False
2026-04-05 08:01:00.291792 | orchestrator | skipping: Conditional result was False
2026-04-05 08:01:00.310333 |
2026-04-05 08:01:00.310467 | PLAY RECAP
2026-04-05 08:01:00.310549 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-05 08:01:00.310595 |
2026-04-05 08:01:00.439520 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-05 08:01:00.440571 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 08:01:01.198532 |
2026-04-05 08:01:01.198729 | PLAY [Base post]
2026-04-05 08:01:01.214451 |
2026-04-05 08:01:01.214587 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-05 08:01:02.220208 | orchestrator | changed
2026-04-05 08:01:02.228964 |
2026-04-05 08:01:02.229145 | PLAY RECAP
2026-04-05 08:01:02.229217 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-05 08:01:02.229282 |
2026-04-05 08:01:02.356551 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-05 08:01:02.357623 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-05 08:01:03.143799 |
2026-04-05 08:01:03.143968 | PLAY [Base post-logs]
2026-04-05 08:01:03.154956 |
2026-04-05 08:01:03.155153 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-05 08:01:03.640724 | localhost | changed
2026-04-05 08:01:03.666796 |
2026-04-05 08:01:03.667075 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-04-05 08:01:03.706458 | localhost | ok
2026-04-05 08:01:03.712999 |
2026-04-05 08:01:03.713237 | TASK [Set zuul-log-path fact]
2026-04-05 08:01:03.742237 | localhost | ok
2026-04-05 08:01:03.760594 |
2026-04-05 08:01:03.760773 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-05 08:01:03.800031 | localhost | ok
2026-04-05 08:01:03.807964 |
2026-04-05 08:01:03.808279 | TASK [upload-logs : Create log directories]
2026-04-05 08:01:04.333270 | localhost | changed
2026-04-05 08:01:04.340790 |
2026-04-05 08:01:04.340997 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-04-05 08:01:04.864865 | localhost -> localhost | ok: Runtime: 0:00:00.007099
2026-04-05 08:01:04.872876 |
2026-04-05 08:01:04.873074 | TASK [upload-logs : Upload logs to log server]
2026-04-05 08:01:05.429835 | localhost | Output suppressed because no_log was given
2026-04-05 08:01:05.431775 |
2026-04-05 08:01:05.431880 | LOOP [upload-logs : Compress console log and json output]
2026-04-05 08:01:05.489129 | localhost | skipping: Conditional result was False
2026-04-05 08:01:05.494302 | localhost | skipping: Conditional result was False
2026-04-05 08:01:05.506540 |
2026-04-05 08:01:05.506770 | LOOP [upload-logs : Upload compressed console log and json output]
2026-04-05 08:01:05.553759 | localhost | skipping: Conditional result was False
2026-04-05 08:01:05.554446 |
2026-04-05 08:01:05.557850 | localhost | skipping: Conditional result was False
2026-04-05 08:01:05.572004 |
2026-04-05 08:01:05.572291 | LOOP [upload-logs : Upload console log and json output]